[Binary tar archive residue: var/home/core/zuul-output/ containing logs/kubelet.log.gz (gzip-compressed kubelet log, owner core:core). The compressed binary payload is not recoverable as text.]
o13!Dʌ!ф͜l̓XPW$@3ym!f% ":2]b!El>;:r-q;ՑS+mcRiEbd1c !GTƪ[#cTYe0BdMcGcd2#=}~[EA[Avmj[,ʒPN^ޏxHg J%/ gߒӱ;~L8®N |6[c`L .GrI+ 1QfFI iMˉEIW,XB]$E\i)g \Qض`kyXM /p]nT2`NZ"=7MN N<|yܱx>\ْBvSBfUˆ x-MXVZQcH$DX(} Ktd55PwMg Zrx0k .y*~ os}Ǖ eAv 4<;͑pVպܛڀzB4[թg/~\N%i3QKuR6BrEK Q)mʲ% ʻMéMYtuTYE6·eݗo.ئ;ZvJkqjһAP"FuApN= |r/dtf)|@Ҳj鳐6Ak*W_)碚lZ2a TbW9ֶ:jUD&Q\J~jRhSb6| JyX,(W6?dKhy>,妤{9%׶ajn}s:]cyumY3Ah0n)5{Y*᳽GϮ>8d='{4{ףyWsl7￞#Di{3#,?^Piu o&m$Tk-rReW(@'+XcݡݡX1^(ȤA)`@ M",#iHq,X=ne-9wgl.p0]:!`/0[߻پ6Sxsr?8vطѥsZp iz x$Tl岂ʀGhC5 ty> `1LAs!jtGt:2J˹Nj9 "JXJYDrN*a2D  2>/Wo9֝-tGcu58)Vi'~v#]7BXc'3 D}f?\L  =$pebLĤ g '|DXnE\ еQq4yg8OHN."dP@NdTo;R[yXO@G$o W.X Y> 9P3 o{9 "7RL0GxgZsяrw8=1h7#(LBf˘ڹ RJ tYKI<0œ A*"Quy^>=ӏE?Zf=w?z{Q3%GMooYʄ7:aIkDW93Y{Ƒ$ݱ`| ],b~J$suw[r5m1M~=荋=tٷjunuEz/I 5}.ߛN`eN"ngܠ`R0aV{fMgoyQTl튁&q=`ּӮbNyNDa-!ZC˦TVsyo'v]lY.YmwMK_oσ͠hOsKMMb|&=$>7 ۹LʑS+VYSDUJzg0=WL{>zBι` 3F !8vւhH(򹖜N)bxЄIzCҫx) հ] /r RjuRfR>{\vTsB1Gi'R=H%bqO9D.R;Q[2r9D ]9oBi:]q_ƪrٳ ~O% #T]ґ &SWVRUQ@0K<Ny2(Tk&g~oi6IMmT,GXt 'K/>(@_s z8i(ru*{\eϒYr=w;p0QjRr8p$P`f2,b´!& Yjr㔱m>Z IV&2ed1j΀iT#8׾cjl^|g`RkB0K v4Y"Wc9̴BC?wٳ9ȆKO^"Enp'O,{oGOW[Q)؃+\v]z*UI>JJrpTo #+ X<W\q4pU o➵Lp;p Wٛw^d'W^!?͗RO'RѸtgR)B| t?`oH6 a4J0Ei+jӞ5+>ʌjk)k@; e0fܫ9bY 3mo< z#8Eŕ1"A PZa853dl`"P2zE)%PG @Op).}ԖxTH0Osf"T9둱=Y5,l3vBº 9$7nh.n 7͓_5M?3GlApDdc9Vjɠ5 OcA*┐V* D= EP` yI)JSt$LEt!"'Uˈ9~N<n;vEm2j; v{UkqăKf&D Vb9)Ո%=ko&r!c2Hh5> 'H c}9P4e]:l@tpQ|TKj%5T;NOT[9 `8h#KcA4Q#Ӫ؋@\.%6!`BL1jvRʃ/ R5mM#$a)QsP|vxCI6$W ˲УCg&& I*Q)(l=-E +P%"J,q+~ݞu w"9Ȍv#R$dNcRɜRafXTqXŽ @?$kwA~, < BJlRPެPpx+Tq' +X;JiTGm^:U1uShd#*yKlt$@#8S;K `G`6C7NЄjDRs!F ATq "H8`LD5\7w+_njV b2HVpVeb^}VlޠzEe}ٯޜǟ޼pp?wR4|д1Qgomzqy0ƥ|sSN#sJO_}3zKKpiJ&i d^gf3? *=j\r4 $TE_fMIeh< TSi5<*I^O>[xrj|yjabDW0KQvaw6`:^~d TrZY>T84)L\qF~NB7Bc E~;!췏oMj@rB& \Û.ՔkӱdyU\tP9gQ1E/xohc5%>Wp*}ff>6ʎ>`Y׷3XXeUN8ʲUfJcŷYe"W,Տ`ŧӓ􎲩_&VPHOMӬ%Y dYKaG5ymt6֒fU6 kEy-opjX0gV4}Ͻ3 EOC{nt#V9lNxArN%ozA+ހDw S)twI|eMwemgu!~L}V!-sn2"@FaZ<0`6qvBin) 6ĝwЊЯ.Dp4.{}ͺ6DGG/"~D Dp -ї*m~em"( ) 9?3gg8Ӝ9!r͇әD-M VOpDRIL!WqV Y%L._ZXh.P፧:c\k\ "8O` yjͪ-q6Ǚ_~g"> B 3[HJ¿E#BwGUAZ pphgyH"j`y 2*pFq.;Eťb"Fr 5QsIaMQ娀!C,IM*x SQuDcF"' ‘x[1:z{ p~\VK6Lu.߼IlYkDmO OF&7mT# ZXz$ЅR?vլ]e|֮VN&)sE:τsAxr2*14B;nD ` IQ*Ə AJ,ALjM] &eF2 +T"=C8rؚ8[߿G}u?}X~a l-U347ޝ4b[IMӃ0 +CG1>BJَCxn"9P*RQGy&H]Ҟm`KnѨ_S[lq+p8IM\ߝ}uŝ/U7f\!BBj"G[0W$"A$d)I^JD-qiP.9jS-?73OǟR"k~^v'~[šOfBچW-~1r' ZR^>7̊^Uن N!A0LJaep͍N!X=V,ǫ=ək^!MΝf| ̋@(e]JZ@@~9tֽKo):C j$ӏKX? cһF- Puٻem-=_\D^rkڣeLٖ͡뷮6&OJ|^űC/5J,: zS* U}:[!$HPLF6$@/"+P%Z3 + i IGO =w9cj/R{ pXvh).jB= ]я'o7"}ҡ7qЛuyy?IJ{e7ko^}n~| ?Qgk^Ϟw_ʘ-g==*LfX6R3\8v)B hϨ l:vY\Aϥ]VSomԺkm[ Bф|0(fgZ[.)AD qŊ@%@*lԱIrpԃ!7Μ2/~H2A8B8Ɠ"3aL&9B(?SG^s|;+ll\^A>n=F~zeAl@st+TIM H)_;Aqg ٜ6J;ӊ  ZrW/9yĀF5(ς#  AR$ 4ģ`:oO 3'b· 6mFMB=+(jݺy!dKRN<p}E1='܍Nq,Z[Vq9'$QGbQW&3xSr^4Q D2i5r\ΫT:*>.UZ{[q>vUn 4+K#@6%ֽ;*sǪ¸2WﭏuT}mO)x7|9̗mw4?fxIvm3|Uq9n%yQe>p>3ߗYePu@ܬs1lilPۍB[|!Yt!qfIۚU(~󃛦5k.hӸi.F1cVɒ}QKzxu;@3ՌPIDL?_R)+h:Xl2_ <$n7Sn8N^SxaW3DO]\-R}Mip{pk&f́EljxW3/79tk@n w;vWm~a0&lg9Wj57֝v.fs݂n7dݐ[Uug ݑT> JC4*\`܊ \P&0܌(TT!ImN9=>,pZ:j8 8sn'X }+ND"9f.I!Ĝ8@ېD Y@n+5Чb'~u>nM7涏dze]K5?w%s6'v=]l9.[J_] TE!T'1+4Nq tɇjCO=v8tr܅ޣ?eޢ6s5y] qWn7Нuݝ-n;i\U̵ fG0^wyc\*q0a|a?N\KZ꣢*pɃ0u([aES8L`!nTe!n(A +p]D; D0~C*7F+#^٤bY^3q3Dk^2~1ˆ\O,+|P'Ac7#lskŢrWPGSGoz᤺4 oDAYAI 8 .!L?ݟ(&Ή : |궨Q\NPAgi:u*,%xؿ86jr'c ]v*R3ZCZ5\t>5$KH5Dn<#wlU)>wUUR]Ew%81O?.˃zwoG㬖e)E>yT$d_i\r\}E/v:zF_j;~i e=@p/P@Զ!  
b BeXA&kGOZ{ I|ĥUyo+gX%s'7$$+eäH4ti> F@I {RVN&)SRPg¹ Q.pƂc^iv`D T8#8']ˢC,ze@NqXQhڟ /n]PǦSXqf&A{:\Dh+I1(hgW%c s|٪&HrY#TcȇFxєr1WLhR /R%4 kp1vkv p8Ikʔݝ;YzpT0 BeÄ1o` GՠOJHr|txWMb9bњ@΅y0`i]Ў{{c=Tu9VNr54@Y<&}||^>zԾѣ߬V` qpИ+`AGE qq goLǑ:kAkJs]Ҡ 1PK8߼3&H}2 Y]b7XklGԳQ`MG\`5_x2غAU2r[3vNu<֑!(y0(\1@y둏%sϫ_H2A DQW%YŁ(+5C0nEg$;SYp5t+xSX<5iWOgE^@FҘUYF̩SigR=I=T(T2\3{kRwE] WÔ:l,>d;L^ O*b-g1yeg~I 'A )B $\Eaa@sDGý@RcSOJ/ Cw+RO'NpJ*Ƥ&E)@$r=U^eQQ Ge%tpDȲC O 4@| R1̫Ĵ M:-=7GT-k[:YP[ ʿہG"9SXUP,TN%rfAHy6QS͵=QǦr=cC:,F43r2yՆ$gmO37i#LˏE v9sШ?'Y?ndXpBu<0#޴u:\#9I$mGrެndI:@/F&/7k򨱠蚻ԏi~.Thxsiu>|{1@5aA2; a]hM]c}MÈ/ Mdž>!ay=Wwg1MwƓo_feF: 㫯yG;7wzdf7Mǫ˜翿+_^Mn}SVm*z|?쒟9B=Oˋ^uݏwK}Cijtzt3u_:¡m}-Ǟ@|na xK+~~9\Cut&$)9cT3&!./EMr9U 7k\I+k[Խ|*K00<0 *bUj fx`5{әl,bj.<0K~pQA.FjxϳkB;m(fn<=IU޾٪rβSva93QbDwO]$G!J1΅ He!fsЅJ DpZ8i=2{# ^e//xPG(x_6{y'}Em8^1D %T$sF8$RxteLw;e@/KO8NǠ_bLZ&/fCKKS.Jqodnlħ٪z얥<ѝtFˮ_[Crf'hn^3%m6;\?aT؏}<ٞkbzVp'υ'y^cZXZETN8S%&Ckȩ;e_+X[+bJ=yA pn L!Nz/Bc{?ƮK (E[?'=>ϲ|Z:,a[l"bLg2M=ˍ~QAD5Z{Xg2jmZfUkɘboQ_c`ޛ\^ZjK)AWDSqM]e9Fq}BM#{G~nGd iַ×xKhP$:?KMz(4s{04]oryދz;^5u/E x%S87p5M霋S8㏳C2ZZ J56Fj4a.?ty1Bv6?/?׻_A #@I{BF|oGozECr5ße[ֵU+imԳ2$⹊}9!VW|MeHBXVƆTuOgε}Q :y{u:'Fu+"s?:ume>iR5s{ތuDk}$TUQ`ȼvvWI]˧}FP| $4O.ah4Ƅz!#Aǒހ8S, s)Dh.ࣃy,JR69k'cpD1vZFzMD<6e;Ev{J4Qa2UWlv6XDv+W@1}$7Q0)@\NFYY41 F gxAH&KѾUȉ2D HGpgIGPI[s*N)}aE-vEucjTW^]&?Iԋ'(ʿ~IeE3] 8={0'-p&#(QI!3AbRBrMuH@BOkB2tJ*(O nTid,vd,gb'6-Xa-];,ܡ|hr-5u!'[o~7#6D1wVN $4hr[m&+ؠ1;{60.a;Ƀb&eh7|L+]p\cAbұ/j¨M;,؍p!f8a3u4:`Z֣l_RYﴠ=d )G/CT5!/Yd <ģS;vgu20 "}FD!bA 8,Yt?*s <1H֡fV},hJzG$8Kg\pYdGOqVQsEˆXx1vYQu}q "t&uPRQ0a,K!hCrݔ0}D Ccbұ/x@XlQQٛ`Mp`mţg(.YMwĹZuŵV*s|wv}!C>@rtIǠ($P ,~2&v'0ɤ>QZ&  AYgs+<%ClQ; B31RHp88m^ʊ`rzvмT2LQw+=!|؟⿽Tҗo׭zo~n|FG߮jKΘP{ИkBѕ澾R$n^|2$dn<`7jjA?{HΘ*IxoR6xὁ"c5$g*} }6T`6 ߆KK( \Vy.\e4V6\&G0vKwrwV _Pw[EQ=qL{s ȁ:OMaRQo'LTP}Zhv};: 6:;<>DCB:y|\plEa I68%sHJVar~-S͉5{mO!~uw>v^;t)hB&d!;qi'$~B0EI%3ȖXR?p2UKxfZ1N4Q=Cd#40dV:&J3PEa"uXtA",KD,洴Zҁ2(Vbu詀 L&n`qb|>S*˥#o<AC4c8j2^Pqʳ{Gt'CLӫࣃ1-Ȯї m:S1!&m8Dڒ1BK#*5qqE3gO.mQ!ڎUK O'#)"\B/ZSvC1ÙV7‭iU"bK(F[ :aFhNPƶ+ZE8ˏ8 rџK΍t:CpY>j_??uP NALW* ǝ\oXyc}}%s|)P)QSRo9[<:JJ,锖@D!"LDy&j-6,(6H]Ą@vk:7 #RS(6^Ԗi`mB3KU*Τ",5Xt2E26 UWdIVJYf9!A;܁ L9Xx8F} ؖ U`hi)1H-h= D^;Miq`6JJ"tl(3O (0a]TRR ^K*p-<VFoAv{ %6}}]@Ur|7c`Lt#=4LF$ |@ fI^?؅ޏp[3~sa(Rype;'!?fRzw6$ Q&LH8s+ ta] gf홽<{^SpT.+ &~" hR-ٰ<{3UJuC:,~.巏rݚa}|0Ѵ)w"xgr>+{ /S9˼Wߋ=S݄c ŗhFj2$@/'k`5!uE0I3*ȮD-zuSWOP]Q;B팺J*+*Q^]N]=Eu%9su6JOO\ kՖ?sphAaovLӵ$k+H3oKA#.S[ aeΠF*zT3<s߇|썒yc1VoYPQeM(&LRc$Nj3i߽:\g-$vPf1 Y$0c@LԻ_Ky=_*㑵D~ډ_8L1qqkNI/y: Q`TFQӣx_k%R_Z`,/gJ/L_@JukNcڮb@{ɱF.QK-9+R 3\%mԡ &cY?6J2fxGZd3ZL&zk,w@$5JCk2F\"ߕ6\"r< WT7U ͖bЀm^~$YՇ&*wy`djl2p %MOc nG|r#WɇjV"@{?Q"t+TcT[j+ 33*ɮDzuSWOP]%t7mJAWZ]]e*;ueT]RW@!txWURnGdz1.`*wF]%ruzURt8HRW@0~j#]QWZ]]%*zB+ܺy t/G*)]0~+]'EV 9`9S ej f+m=!ǔ ;!+wIԪ @Ĥ O7HN;ݖJ +*Q+ٶDe`pRk.vH]).;\z!+1 ۢJb|UGz&X2#a&FaAÜ<:eT뀙^G&êjdR2j=?ɝU=ttW"dgd33q'ۜDdR9arc\xg~8k`+}nX ʟW4*\ϵy*v$qL AJѺoK!33==T&+o8eA_TWݜam3; ~o%Su:*lHV> 7m=]>Ge?g7o=ghk5xxf7>&7p\#k\:'Ԏ` M_ e4pfXh+bѿ&*-"m)Cq;B9(M_!\#zPT9F$㑅H>HԔ1 FрG!eLDz֟7?ؙH21_s6YkCz,N*h\+kPSBӡNZrҀ, >/Íi/5Fj/]T'՟ٌHoYdO۝eЯ;\uQٻoc_2KI5[ƹp8Rg?n~݅{L,c05%ҩ7V"FMsFV;|rܒ04Kw6<=kF8=z W. 
}3&+H儽2d3vnW!{Ջ 9W9v9DZJc1F1sɭ`)i(&fID-$E&hcsvC{-gFpID:kdī (h Ed%xo@Eo}G5*Cz Z{1[,} *:<0;AZUZy#RT> Jз5¨~}!r@p@ {MjWׇdۏy C7Į V::z t ^;*$]>Y(G(Wy&sʅ!K}Gh@syD;؞Xw{䨤QKXEsP,qR ރj3pĘ[!9mbc#;GQafnFfSl󹙌6ϻo/~34Q42tzDl@> IecK,i;`14-NG&*h2:ZȢrD`p'53v6kR 8W~{OV?=P&柮[/a}6)j}yLQŗkCd@%U켫rEbּ{q6q~ȝ&3ԏ-lGΏyzRc}VX<($Pq.E)dn`NeT{v7WN~{v9&6(AiSR+}iӧYg~8)BS:-1cDS#wLb,0HLU,^,Bp0 g-28\cb&NF EhU1m]I_,N ,Rg) cfha@9);Ŝ(aHdž3vHΰ,r /:'(ɻͪNr˩UZ'}5Gp7o^S\a4P>s#PhpʉT&|;dlQ  O74=,g,*P3WphY^:,0LyIшzbR:*>XN,Ol/:L2qk= 8pT1FҰؠSk !1 Bd4 ºHL3O?tHREhBg]61u+Pud׳iodGx #it} gnƑiZ[vQ(EHGAE.Jb52`KD?W`ƫtL-$qĂ!AQ†*F'QA*-vcmÝR z fxBNWJegfu: }>8>JCBS*i*7e΋ 23_wԅ7#UZ$/!_:܅<ن[c*+6[~RU﹫ka[כ[D說sU]jn"D2 R_ ofxm&DZV6B$#JoVndWI*݂F2-׫+tXB$Uu[;U7|Qq}RkwT/X&aT AYh;VT4vK!cM}̎~rm5mG.}cWO,Nl2d_ͤ?Qqtz~ XU2xZmj8qڹ g4ϿfU}ƗhN0}e/Ϭ雁3#&A!+;(YvsHr$M>ʠ=]>`Tp%۔=~J>t? kECyۛf4=Ujri!.4@5JO揞qw! (#iנ[G1|-?~s˞,.>zd|y2'1OkFcχ㫟1rzۘTe| OЫ߾6y+?a{-^~Cy /sZ-^m*6bȘq0%aD,5Ee$V-Hp'Jɳ h7F/I2 Xf;dlr{7􍭲C}MkRNǸ۪'\Zw,p(@E;BWю)ݵ6'0 y y<)/{(I"CGaLL!1uL:g#4y"C?󂐌o|Y=-/?:Y_x`ډQ&=Au@ NU.7]q!v}ؕs)z^0]4/Aɹ▱GI|e`;![nu 'PAػ$uNG]vwnt) IUp;o 1qV{t JSL#,8^%iA é\6?]ߝUy N6d"y]A6vm^O?ʲŧa+^w>}m~|/)q@A>J]?^f/f5d#g"Bl8aU[x{(ol?P$a~ U -Xv/__M^HۙeʷC0;;1,;?NSdĮjAĭN]Ԉιh6wTQΪ-t680U~j};/NjdUBB ,82 m.mkٰ7w#p64nޯ'[=rYMd2L?nV[dxf< ~l*LQe 1*&^pT\]e /z_d{]]]ѮLJ[@R;d ]Q#ׄHbW4< :jk[·զ~$cW5ؕYˌFfOB.CrnY A{*zN=|yb3 kѽMNpcޏrCGyH6y铋u:=FNSJV(.>'n;=ާK nEqK dj;NO8SR#RT2q$'!zGG/Eٜ*BdmU%Ԛ)c$\C xmp˨BDK*  mKqŦH$wtu&5˛] >fd\fX.ўᢳzFu&'k#3hQF%1(@➆.xCUKjggZedɓ^' TX hJ5 *. W]\3;nX  8uJeQ&+%P\g)HOS\kcc9O@ebEWpHyR}Ajid"ia$ pH4QkTREq:$Ie_a"u"t͘hܳK '-#ܘhR3T  D4Rm"9haRnV~CBYC*-,[qeTϳXLRq2/m_W?~qǯ8Q|Μ6aX6j28~gMڕmvyM]iVjlj_ tnYrogߊOKKpe&Uu<5b(8aN:MɬW\p +TѬpX [w*K/.YUѓ'͟JA90LA.u7]t'hot~l|uڪ_W7;ӧ0N4,\zmj dlymxw,W ָD;/TaBxe_?SrRIB1LNqUz͠iyf<׾o}&FN^M,Pی278ċT P4PWڻzq&sųq-h_ӯmg/mnܛݚuoZ2x3`*06HL5`=NVQS"w K%fϟ]4vt<>&bL{r)$lRZ$mPN1X̥iΆVΓfq~vdbGxSvұR}k}.u:]V .C7 sB*à\`XC)M"%ܕURkZۄ:fJ/ ޗ,m_Cɘd̦uK׈PArnGi*4AړުƨY6Q֦l:{" %Ja$/QPzܪKR1TtP:c"eD )jT 5KfCTipR$j]Vȴg6r)s6nl $It$AUI.xsǴ҄'ֲE58[䐡 9* 2}  $ź4*'Sd⭫㖵#Smz[>gf-Ae,D;W x !*KAȠgޏ̦I=t.t(tށuӾR{TkT7A*EMr^L(JaC 4e&GLy*Q:ea PVXe$#1(N/wq+o2tݸ|Oȕu u&.㠪UՋ ֺ̹Tb}uOԤ,i?_Źkx5"( Py)C/QZ1O q%olkLP 6dXwz´&1ʙK Шc:08.jk|J `$`%>G3AYm6N%)@$EJza cwg" f|L?0w/'Of~ErϦNp}Jrw0q7MqBrbE^ͳjAge}ZvpT[^*pȋyE[q#^n.nh[oyQgzpav:*^?UO?\G+?Ty— ,jyӈ3xV6Pc"i*j֜I-$gJZfqT;*E dmܾDNN:Ƽr]^(U g9L>=ŊjMu+7h{|*Kd"O&bѤ(>i99r-iT ɻl~sվEo*{ɚ[n1 dВ/ڭѪ 跃9fOJ{,-ONNYzfwRmYZZ*/%tֱ$S8a]v5M{մt瞵AGI(jFG!nh`sOF!f( .0#0Diy>7] %t~{ ՝?{8%hVG`=:J0)zspq?ۗ(DMO2JNISj#BLU۟h*4dGHW\Gt ὡ Fh:]eBtJhj]e=Ү2\iBW]tx*;Lapkq^M Q.uFUN*E( "$?BhUxb:׷#3 +(:a*EOo-ݧi B3tFei:1ciHl~2xT^8$͸X<UT8 ?qܡ "VMJ%‹r>=5ropLgQ\>YTֲPtB$/)߄w{qc\qns1_)'{X{*|f:{.fNNOCR] LG{wϟ%ΕEV NBi`VFÄwdA 09W,SDm2RJ)t]2s:8y{/Vo{᥋ʖbIձTi Eh2\}d󆧌R;ǸAONS(DU&Wٛc̭ zt%U#p7th!]NS (+ʀ52\BWu(@W] ^sD=qFB/l>VmܖP*a#ߊ*wzg4ŗ?O~?/Q: ÞTe$є+MT~iHZEnjM/b~|6BQ~rS3ċ+#!lL9*0+o|qbґ'>q Kp.5܍#/ Lr+qx -?UҒ޷ɣUw`3wXy==\~>SɨT_ 3P172,o*Eyv*>]F5&Źvi5yf2|CeC5߾&EGU.UնC~TPvͿ]mAUu詡@*7tU5}Vw%'rGHWp GthWCY*>#+n}+( ]e fv2J9c+a @vU AS܉2Z#NW5#+ ($J`- c#rMhѹGoȀFop5Eoh7 JFmc4 ~iμ*9hyeCe]e?F SGWm+ t{+C%Gtȣ(U/t+cv(aЮ>>DfV;vpϼC+DWۡT]-JtSF=+L ]!\X_ Ԝ tGtkpA2ZNWR4+ tx3u3E\i_ 2*NW% tJ09lJW+BFBr_YC)@W$RF.5Rt$G #])^x KB8c %[h3eANjS NrmD A'ᴕFwmI_aCeH~j.f\bĮ<-A!H):@Db==tzz{I/%ȞT_"p{,ӧ]~OpT Bq+cV?댢 OO]q!w:[\-Kt6(x#qIwbfrA7޽hͤՕW_=Tc2LP!sQPfcTx?O/oUd/|]JoCa!e^ɇ|W`W᪅Հ7 A~7zCx=-lH~Fu}<󗋢u?do_#~òm~\}qV5ബDrp=] mПTqZy_ap,k *WuHq 6)R7֯e٫6fg +%BpRƹg2sk 9E-#$guSpZM2 7ɮ@҅*/ ₀C*u$㑅HFꥦ0+Cʘ:3>2;IsS cJָ &?oI-IN4:JBD^\\:Y]hnn֭$UJ]n(]xmfK:96eoPF<1r$ $H,("P-=!ƒ BFOpGG.x˨ @טXDcں-2X,Rg2DO'SDcLVS"!wK9 Q Ƒ7vvz{BoYzW%XrŇDn3 ;v2Iۻ9Ҭi`KW_a:P>s(1$cNcN9IGASCHMB,pW<O1]tB yxƢ5sy 
'``-B+GS˔'L<\7.Oru##ǓzARŤ#Ŝ ^$CC 4I{)}"RuӢCg?01{E}{vCALd/IJwI@AVc=sGFjmI0ՁRt1!<  (VH,^uWo,!c@ޝIC( U#̍`BR)QA*-vT10͒]RnET gZV•RK3y5z?xS<>KGؖBP*6eFŕɁq;iH>^'w$ㇴ[M}{>&ȇV;lvxdj'7+7U`~~3+u!wz_ fv}nd$ޏdtodD 4\-%:+i\tgq=^iy#ϓYy/L~Mf7?}crqTev .ɛFΖx_f4oLxebMҥOWZ..o*ߴDk{Z'K I’-ǁA5[X"_oSmZ1zzI3?J^1]p8noE{}LCùeQβ5&&)7{#RS'Kb\>0O[My,4"[8jVE[wSjr:ӅLë@ݼT|`x8jjĠOyɴ+k1QeE"fKCUĽe֚sbEKC_"4#@9>ԕNcȂcj czB 4XR{"mTH DsoBNwo9仦Nx$!L'vw@{!CC'Xú#IUp;o8 O=cF@/M1`*I.Uk=:M+˅ehN$0ȋ4 efP/M0'P- w Wޞ}l58.o";%~lpfvu(XԼ,n{N J?gɸ`3gk YڑKnYtXyf?zdQS:%*1ѓ9%bW?S[|gp&.P(p;srLcZRI1C麣=t)8[cR04|rVk/1[,} *{rB %g%FIu~t.`kmۧғH uf_&hCy+уd,R;yApTf`^Y<K_yRΓv/y$]*S3cq0yLpZrZA<"kCw )'(R(H9@k5f,`ZFL&ZOHKD,gZ (7[Mh3\1ٽ߾Q.ZGZwlW{;h&SfwnfIYEǝCs-&VgLe ݛR`lScJvw;ǶMvޡGZV0--S4r[Գ!7aT[pbXEU_Ϸy׾9r5}<[tHN{s;Kʰ|`e4y5AԯF>op&ΰE2E!6 e:]vʎeoR~/La,JtXJ#"s{7ZZ$:$N.{.{dS"둼!^ᾎ&z8U0M0 .Iq GB^+npg`Am -,Q#L$r>x]`D |+JYQ{cg* z6r^w:T'SOuaf_|Rf{EfG0=;+9bDALFqc;"q+jfB}HB))&=)Q[~R12#>a̙R}#coFlް78 {C`r8A3n$^?0mApG~TDžj$5 YK122V1d9%s)Yp!HwH :]|TQH)$g&p؃ "8s`9Gys|pPGǙƚQ#ӫً\.%6!`B u¨U1 m+K)$r=Fo7vR\E$/8(q2tL)0u&6mnbFT!Nx,k&%ɽ[$@\0ԃi%U8aK`*[9kmȲI0aw떁nݳ fd!i=IG)CdSiS@1ENNue{J|˳3C L *H`4gty.n|ܓ8uHyA&+%kP\g)HOS\kc}9O@ebEWpHR/eѾi.&:""Ѹ^G )R%J֙h$IA/SMJ O[F1)Фg(  D4Rm"x¤\߭v S/@HJ$[qeTTQDͻI,F8_4?7o}A8Q|1mɢ1/7U4ep|_GuʕMVyE(V/t.Yrloŗ%Me"\,O ; E?hSz6)'nGe`RpX G9,hN,xUǏzWfwUxxrRP SSek>~Rp(󏫟o ׿(~꽭hC&qע𕓳Z* Fq_]b')~(XtWѧW7~cwZz9nlIB쟓z9r CT7I(9J7K~?C9/XMsߕOJ/CF/7 I/3}1]Yw]e؋zppT6iޓD (2g6tE|Y͞hha,1UɧJiwY|\*I^y1@wM8OxxUjeJߡk>DȪeh?.CeIaMq6sW kTilHeimj 9bJSOY`1}ʘuv}u  Jk?JK0T5Uld5&ӓ&&TT*tFAq ,IƴT}>uL SDʈ*2o SJ2j̆JTipR$.+bdڳf6r)s4nl $It$AUI.xscZik"-r~ d:wGRIG8a} a]+͛⣗{p|qڑlVޖNٿY:3HEIs)q4uvtr4VLӺÍEI;X9쫉 @F Px G4IqesaBAV  ,3ĔiH]O!c\k\V<eUF2M_gxMI xNcMR}^nw3`bM\w*UMHUsT*`.U\zuOzvU\|zC, Q:R,XD-y]cb"X> ֞FD(g.1B58ju`Tq]1`͆$`%>gNycpm0JRHȕ# dua 6?Zw6g$3ix=ҵ"q&7DJEQ"mZRˉ)S"PEm.M ϔdE܉  exk- 0Ӛ'<$nj+QHVݛ($@#RÙ"s1 %b @'ka1$!]|ji'Zi9ƅ80 ΥpL`[x`B6x9.Jcϱͱq# 0מo'ۍA9p_@EY Nsc6lL+~Fc;ǮA~Am˷:|g'o/6EC0y`ör܋q|\J_?mg{h&])΋ ʁqR =Rx5Ϫ Vֲڒx̰x!/EEmōm3tvUc{k{u#/9|.7^ŏWQ웟85È3ӋpR!P2UL:c 2*Exyy\̔û^p֎]YC^z8OD$ԕ@"Z^:l|4_42.89=dkxtͶgiPր3p<^&=L@Pay(G\F7ʇ tL$АO `5gR əDՎJ,YAoڟ18ǣ1?h9x. 
(Y '9>=łjEv7صɄK.R$.cƓ "0c45X4לz@V9Ζ4*HGm6E7Me/YT|{1=4vi+*y[moٓ9F c`k%4Ii.q\P6ᑥިRXJg+I jY?ꮦ1G׬] A$B05C( Oi6rJ: =7UG= iΎ_NJܾgOGI Sg.l^g~R?x:h@Te$є+MT2p˞ (i'_eDvu*Us;Nے}e!`/28MЮz.uE&6k obq,)H)C21)#7:M *8-&L}5f:)fO[o~઄ ƗRLDȵQT P Ad")!ShbvbvbvgUJXL{nN22bӕn8CDQC֢*ea^{/%vWrz߱_uٝ[uçiOV{^IAԼS\f">6!RH!%(K.MP*P?ݿ~zJJ֒B2IFpǼ6 DXRS*jT9nxg8Ml#/gg nVl.BokZχ^LO{<w‡O=+9 @a..$hEQT*uǎ9*04PpdRe6"FS&9ܐ% R9\'^q*ybEg: h4MΛ/eg}OQ/W;,HݡCݙC.bhl!R3_ \4wh̀Ew;Vވ\4 Oxu0@w<Ѥ+t z(|Ct_u⫮d&h=]eHWF] TeZtݵOW >%2kvy_{mX}fpٞj3|O7CtƠte6+CVwrJJw-߶t}<]!JIXOWϐt`+D+OWRq7Br]ev2\ ]Tzt% p%;tRhh;]em;^NJ*ʖSˎ`'C yATPClPoca;s?`h~URY"y6IxNU1-wTpfO_:=ߗ'8:>A7ziue&8|,TB !(Aڠ^on,9ZHKHE*'tҙ21ɜeF{^^i=˝$Rr])Q!cSLKZJϮE'%|h)J +(Ba6LR-C̰%ע=Ý&1!ʀ ]UpMW*'7e,#L3tl3ZENWՓ+Iz_1A6쒾!\د $QWRWOW=]=끚%1GPc?a4̃QAP]Eo>G1_T{;R :o< WmqP|ϋ#âN7}Nl={W_r)"hQ7{-zGAmZkgZ{X_Yw~T l 6KcTU ]>Hq$%N3{^ꣁ<{쓯xO>!ݶuu8myǫmvqBt{toۣ?>_9/N:/^l8JNo} Dfo~ yI?=cpwxwq+ώo^[\ ʴ}=){4C~iBP jwg@N-E1ڠ]1ʘD=Aft2Q2b pZ ]1Z5BWO1Ar+&ZbBW6C+Fy`# tm芌.+mNW@I ]=Ar&8ی CW ׅ QFi$]yo?83csէw}8aJyNV6bFpb0Z{m$u*qpKEπ[ ]nT+FkarIUI%+ w9OW(@W>[]t-7 |jwnžw՛^>˗/A_oOk/0;?GqdmvwE(?||mҙ:C ?M.fkb{x cȧb8oۑ]b/ /';t۷or=ټ{1I1*SQBߜzv'^O1z>yj>g ,5z#Ϡ6{7/ _u?=?o?ёZ6/_eqI=#lqqt(w۲K"oUXwǬ}~cw'𒒥2[ߞAQMVa8 pJ$5H 9в=bswHf,0kD3$p'̍-Ռ bzL-x# |&5NnoC- m*F)]mlL*K"LCt(.e`0 !B1ڱ*gs#Qǘ oT#FSHQ"YWǏFB~/7oNs!Mu#NQ^dc)h'!T {ϛ4gUA2țk#8sԬV[ ރF%k >7}gtsaM nN';Yn)Ǟ\1#QQ ~\̷ 2Sh:Kw#|`5 j}ITp,ֺ[7.m|i"%h,I{mgj#rsHF 9yRcYB Q|n` &`֙JbP=wQ$C.聺0Q9^&THdFG>rN jSNcG3PQCm>+h-i~G>83 AL7XuS7 bʋucmP6Z"uV6`ŅO5Nc[ 3泴R#u˶ Iyc֣6JQ,wP6Pc.roPPCoAydSXb<5:\vAjc4)@F&EÏM(pm 8t, |FEEt(Mj*6 FDg7d2yA Av%4rzR }W4G e6 a!0,JLhW4 ~DUuASзbt,rXҠDtP 6CZb@$F=(aL!#A `_ܢۣłTM%b6(8Yh>3(RAUvH'/ 3s?`ד[~ɚ/kozkU0oYΎ#D m3bk!!ƻAy@r *}rl|U2GH}1[ZmU&1:Σ&i.Y. Ao%CJP$JETdڇ`)C#`ҥiF_h^b u6B[!q*p ? ,0PFu5;:@vtVB_+!Hb!ҧďXM7)ߛ>ʻ:Y.RurS}|6C@FD2=R\b>MzCY<.JX3nD44ePwExg@rFGK֬kdž@ ΝU-eQi6f4Yd0-7n(ǧ3^ @zVF6pл!)C^$u-PUv#!C{wŹCI*WswHw[Qz낆dj*g':R?']߼Ual+4Ou XQW Ժ) }r7pO6r%֋1TDyò*a‘r;p[Tc$T:9Q܌J#x$z U 䪶5 !@KULmuLG[c@sກ6[[EѴR;✵AUJFm:3Ag2vt VPXG%==jyAц%7\sA-4pYqC OU!͚`)o"a2نf@2/O 4%尀d.نFp:qKކRZ7$ЭME<` \t*M .Fgr\Um/`XKEu,*f-$kAH 8iBe. 
V3P5wwW,"$RT1aՋAll-VްRG~Ͳ͖whN_xqHh7[YpѾAfNl_>v ^|bm[5~/ݐ`A7ۼ4W8^r3l\?M8RDs>?hugڜ^>OdN?n>/w})`۱mV=ɯ3{kCcO/$"^У:n*z> g=%H|@$> H|@$> H|@$> H|@$> H|@$> H|@EÒ|@(qfA.ğķX󀞤(-H|@$> H|@$> H|@$> H|@$> H|@$> H|@|@(A/i1>  <9#>J|@$> H|@$> H|@$> H|@$> H|@$> H|@$>Bܒ|@`7R!r>(qJIȊ$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@Oz6qo5Qo7[?]{o]zldtlK -1db[ڨաۖQb[-{.>RM}wz`3]= .KWCPpXt@WZK^k_]1`CW 7jevBWOVҕ^ ].rk)thM8tb ]YC/|CW إW2OWaCWdIvAtk 4kkt^ ҕ3-R;&b,?xu(Փ+oUY[Ek0@1quwCy+wlFxrGbֆV;{ V/Vݡ yŹy^+RtLؚ4oFζ7n_C s8fnRbo:nOwۙd1[%FZɟ"KwVr,X6yJq]O9:}J@VA7b>Lҩ߿Qϧ|LksZʛ`zwo(X ﮦe=?"+;j'TuEaH=pZ>OW5d>6g( +jֱB㝲BjYX{3ed(,XEjǣ.`0`4Zp zz#^Kmx19#fUBgWd-ϞxVɱ`xr$WIQ'jXѓs,4DbXáU∫cĕ׎쳂p5D!\kKjbWǃ++W$tkB`nQq7+ĪW@X;: ?b'Y^pI=OɁYWԈV=o;N XZQ Xm_U7ގ:B\)#ǂpEVB1zb^:B\iNeX4JW XCR%:B\1 MAb9,Rpj_zm\J#W8_3HQb1bʗ+V jgC q:$0DsiO!J\ס^^f)ʇ 52y1U lt?|Qn\tϢTvyRI"c^ +ˋײy>AAv ,n V3*CjR`Fg{M Fbp嬖X̜'y9N>qASuU0}(\\(gTR{:Hqq[QN6u8WqJ?pC{v}㪓\=5Min7z`ںVtF`n{MRpj+R G\!st_\Iuk/WVUV:\iJrYpA^pZ9t\J'F\!+@rn{xܾFۡ bWz.AT'$Ts-Or3^CtǤSa|ˇxĐePyҥfk ]{= V֫[Tc-(PB`Iz$Xz,Ɠ#JR<9V=9V 'wx+\89EJՃ~ʡA*WE)b/ q\J5ZWLj+T 4ł}91, I'V >ƓU1 =[pFsq))rF'X.c]y/^|q*quBOIS$-W$RpEj١ ~vpXTx+NUS䚞74M3:ʍznKqU(,WK\ھ2tS9W+R WXuZ%o]JG\!4F]H0)W,׸Rpj:X%WGƈ \gES xʾvSpXFDgҟ:\^ҡ^yIdƽ,1G4_.%Pr1 O?vrtdي+yixkSm˦+$FCF1k-6 P+Y7NE!I'=#ε-!*NlNܐ*JWT%dWM'yV]njaQ_l.Ϳ޾}K"=|y2-?z}f'DH>uA4 ?㿫+]Ya?75':˳%iZv_?sV|u/]0L.?w}ywF~֞n&,b4fǯn&6_wsRw>'~wߡg{|b>< mkv{km{zvYMβT€ͪ &bQ"FhAit r[Qv>yǠ9aoϦuu c~¡6r15ihD0ub$GQŴJ۳n\/ By{v5YFlZ;7WWdX??Vo6,͌ 2۲+򫊍Vz2avF&+g~v$c9QPכU,2v57m篗_w?.̏?mϑNRS)Z~l_PptWo/Ozex훭[A5HAg_0wNƈ~k4yp{u5N&yV/xx6c)Tu5B EU Yw/iɁy nhv;8ϲ;k\ѹZfȍZFQD2MBY jw֢@Z8T5dC0:IѤFrkT"0۝YOnnS DLk$\TҴp6t9{D}KWKCY5<YZ(Q~d' M ZXR<5IZW6^$FH豆<9KZQY19MB 8`_9d))DE7&gW S#RI9:վ Ade|lGz tV ,T:{٦z#lE0GMt9C:$bN+T9$BDhgeè8qX} i0@ ç$RSd❦CŔ!q@bb ؎}l@}lm kG=vBwx+Ap`OÁ~J9{hu;[<``n ,c䠫U24i?0Qg[P%{ƁLyG̨McxփB;U uݐ,K|# Ei['rr#LB.heMPQ( Tk>t<Ce(`UQƥQ5x!|ΜkALc=H][oB%wd]rpo3@;=Unvozo}l.zU ǖ`re.LmDCE=v@z5bݻdja$ ȡS{ZByz{SfۨA1)c~\yU7H_-x5y @}8489FЬ36/Kp4}z2}y:xo)GebWA4TK{^Wۊ֫l8x8Zwef1l_POhHϨ Y-@EkPLǔCC 6RUT^g Їiv ?{սO'NA1P[j/g öԦ(yuرC>F 6H䐅ZTQZ*k4!::41')?{ƍ_aݧxch<]u6TvSy|T h+}!E= I;N[UgA0ZFlK<`J@OW'9$>}NUJ/ F0l/3^,m΍Υ%t$u8yJA:0Ѻ d*jA0\$5 Kof; 'x4aAI(m9& %sj.`Ah zC [5Z,I?Ȓݬ:pwCkek#:t7x/Vh($$#eٸ%^HM5! '2x ܉8]O3Oc/z6^4̖ 6y%ᐃ:FJC61—yK*GO{txĒs=$28x2`Adi(BmDNL4 !$]ո( w$L2ubk01 {E=a؍ -,HVjݢ D3+\/\3oJou%\Hx,IW|) xbݱLӳ/%j̗JwtbFIIv=CF>|2f)MO,6C; :n7f?wP& )67f7Q& 鲤_6]iʣTH@vu8^ tr6`[ՎbڜL&gi:{\ (haBKF)}If\Ϧ鵫uN|㱥߿}d5{@=[WLɺ<ŸM_ ,4)ѼMK_؀.ii Y96Ogzv(tt׿eеm6qN9`4qȗ[,ٛ2ԶrP3 ڝvEsq&-Tg|=?+P~]}/_^k?:~o?ֽ%׃_~+ؘ]#&@V  LW0'8., :ݙ?|J[o&ӓǼ{ו3.,/ٻw^/Lޟ?}"~g˃ͅUJR?/O??/[=φO+pbfﮑC9+Xk.?/+~j}W^ N.Jc=\Z"Q߶ Vhp!SB_2!^ͿgB+ܼQouko2S)7EׯG 𶫹5抇I*'⽳nMS;iA. ^-.*\:-vtI[ BF & ^Pj7yv5q? n ;kA(reTa ޴S7s.Hn"Z7o+}Y%,8 T! 
*B *B/=!:âR1XtΎ,yUJо]B4TJFȠ8͵a9FE0> 74 M%yVz144o,_"EƵh2;^ }T|S.Ls>I2[k)YFܤ WH` ` XnA|rl=_ ^Ni$r)ZhT1IgM-r(kN}ԮZ%s:0Wpʺ_uw;O6>l2JMv*PhX(kN}ReVyސ:$k B@V_0̧35Vʷ nܙ/]瑞|h2bQ&+ QJxࢌx:g2K1@\M׳_n~ןOۼ쮟1Y'JIqpyƃ9+jtBLm`pjzɭkoM~m~O.v nwW8WxY'=Z\1z|s|ݐ"5qA*;um}7ԓ$lzqO8[M!&?46iQJK2%f>=G>vI4XlQAkW Uo KoAKi'C׾ {ǽUUw^-W7*8rtt6՚6x*M'2ԟ[|1o?]N% ZF&ЍK`!rizc+rtszc5(E^GsdTE^ QrT:q5F>!L 2 !9&& Zea-G[G!8.e"r>Mp5tXЙSk!m5uՆ~ZF]H+-چ=kqQvœ;`SS;@wZxГy 3 _HnEmҮ`לb)\l*ϲYX_Ʒyt]ނ|FPF o<&=h~2λ{vw&^eBRyw!Z3B'Lfq9P,G ^s|HuGjkrbWd@Y3pA'EL.HYzCv's%}&!U[2VCgdgT(@YM & h{7]c2Ps8gzil_,hh4 g.9O9 ,Q2$䙙d1 #2ͺ#D@H`*Ξ+@ ڠAhdE9`e!]NgL%v5tvK05Xv58TjRR`bܕ ("B8S(rd"pL%-Y7o&+m<$,C.ɋFHdML H¢.NYJ1Ȑd,[N@"ǡQTDܞI*F%9)Fc:?WdۼVn.,gE򍓒]ֆC4R"gBs#N-"yBK,%b5tvKD:\-3*YJPY.B/{=de}$:P΋l:*f쩙uLԘB*4SRdU<$ jւ$?4ƞK5HF:e1Jޥdb NQ:s<\XV2\񞑾ڀBw!:`\i{_!Jo3i~ 3|\X]E2".Ͳ_eYfJco3D.8aQj^ ?>>E2<7>MX(z|@mЦ'u*-<ٌ"EN"v.בK{nmŁR\)Hɰ.*DZm^V#.3;'{|y#J,i9anZM2ٰA4`pH|u5Gָ6/զM!7znG.?'mwÏ'ɰcvecus~cчDŽ=Ťy.$D8yX>F<1r%I ȫ"P-N!r0tpȥTTkL,H( J8mBB5ĒE*L<鶍)R ug%0l Y }m7#|8)^czP͵U%Vg\LQJ4S \-Ǐx EɘӘSN)ޓ,NFjR(b9fhr;Y=<6EU0i, /C Q L-S^Rk4(wȓyn\/=#ǃ_4IG<7 $ 9"I6AXing[[df}X/ G! CP[; ioAd&y)^(1L9##Iu$a1hcBxR$VC7XlEWjS'<]]S%7DkFj=®N'#amxʹ7q AAtfUbC;X&{!H{IYl^e)WxS\D 5 RPqHoeDW6-5Yu>]B ?ſƸEG;47W˫%";cu\D # 뒁z|$4UzڡXe- G'4)x$5)ZԔZ+҃*2~Ԙq&z|ICQD`Uŭy'1AE""LwS3r .]6<"%q 5`yd)@'{J91F2H<*n.\<>L~L`Z 1$aJˆX \HXi ֆ@;iU"Duy@/qGV՚wT|ErcNxEQJ*'u))0o+ÞR2l!s؛f2X(ĒjKE,:o=Ey3߰N{#/h (Yd(( )9I C\/zd a!7l>n/oVO~owj)iF@bTqRSRǛ )iw4:Нe3߭bܳ.ݻ?b=6S-zO+R; _ >B{i\eu,RqXtGY—%l:K˭.#ĂgePBY_g:OBkbZG&PHx3jg>9 4.u9Į^.ޓ~T>{' LK%fv3l;QD=Ndg*iRq.̱r2bIt R2|%gS4awdk|]tiI:nT:dLY!Yv"=T7ɺTޭ:O%cRܹ K(uĔzexn꯷Ǯc#Zux$]*e4pv\̽p9B-V9M-E !i!E<T ).4흱Vc&ye4zl5DdFΚv56?pNJB\64 Xkq;k /_f\2}Ug%mvE.~ՃᤪӞJ+D.%+& BdR w1xq[/TKg]th^.]9.k5_XbͶm_v`ّb]';ȶyFN?v/GAknn~OTZ;͍UFӫ6 ;q=悚:J-*t8@D}mNђ_eggQbRZ>,Sڻֺ@d Қ$!ѩUMn >xTE4$u) !{ZPDw Q"5m.ia y;6rDˊʫѬG=;+9bDƁLFqc;"q+jfؐ7oP2zS^R>RLzSNq飶ăbdFA}˜3r#c6rV#c>Y%f I[ 7Ma-c'$/9_y=>Ioxg#6 M8 "2B% G$rD*^IIiJt@I^(BR']BIM* %(Mґ0хT;9~N6ΌڸCN]zw*u LAK9(r,SJ({n&r!c2Hh5> 'H ʉHz@R{@ҘxxX)m ۂcSD$t!rKBʽ SjH^:5"8a, J:4 z1J2 #"ƁQHLd$&) " +@&LR#1sɟ?3"f#g5"j 8n-3l43.;\\FmQLaY[)'\ @Mb2 Hvx-x;6CfxxѼA\?*3eFp=Y?jBu^H(v/Zq12JVU2d9%sɾYp.Mn@u^|TmIdwf<v#'-Μwa0xQ"q楱&F6)bd{+¥&L4e|&Zж |K.2Y5li8C-HD_`d.Aĕב \ڶs>%>zdJ`Z.h̪\+UHRJ( E{hA=(ZX*AQReXH+FJc$F6oYٻ1^ˇxZefd[ 9i$\V #"Yǯb6CS:^$1bDž'^zϴɒ'/40-N"Дj$AT\f / X127%ޖRma](! ʧ =Nqm8HQ<Qu^Q!NJ1ڲVp-o4"HZ >bk#"8JЦH([d˭3zIY_o顂t hFmRnL 4)D,r=TF%K6ZߐnSҪbWq}y[Ģn}'_|~/qs * ϙӰ߻άhM5M(7J./iv~йeNjuӁϱjm*/M]NhPn($;*Lף2\Ä٠>ğA~}[^t~ـ ƩԭnUBw82zRφΟ^*@0SS\# ٤:9Y^zmzUu׿{&/Ł\շd+p"I2Tɵ澞Jg16r775V26~Ks(:A1Y~ /w%x1gxS㾻J;;4^;t9hjC2dgm> ջ8,?ÚN m)9֨ؐ\$&sg[{_~%cڒ1\vmu<;YYc"m!-tliܙO/?Zv FeL^{qrQQo )F̑3WsPη#p*X>p3{$t<&[lmWԃ<oVM/ZJ7ݴ_ūRae؊;?U` 'j|;Ë~wP+l8{ՎEeX$T{>{uzsiX\.RO=5; 9`y,;j0|DYP|ԄꒂJ0((=XzJWmt)#Rȼ5LQ+xnЬ{l|@ Nj`D'ъs\JT9 76t:ic*DnHZxK1c4_ҸeڿѦCkgI1vB"a>x)hmqzplߝęfb:Mv|udխn?͵^=(Jj>rM` T[gdg6IjG|љV1t2@Bѹ L"O"1烳EX2C2֏u6dQtPnښ[.ni9/g ȾRDKǙ-%Loi?̪ؽ-7a6iAӠgyL2oق85;:ʄ7Ҍf3VS]H!4 5.΅LIl"jGs7 9+ԉގwq1lj`ne'ʹ_R6;`KX d)M:x S/(hlItFw޾EwdԛYRT|[V4qi9;w^um-7SkoKCNχ{E6/&`_nkMq0R`R<$2b8p#Dp\ Rp3Bhk=gGɬ2? ж.̖\o`R 6_JnDj/BXR"KN )}NY<$[fWuçiKm Ni~h)Hr}'M!@H RK FnkF*h3R|F*N%򶵤gҹ 912h=KWjJ3ݡFOLNK#Zgӏ#"wl B<ÓMls͞bޢL*@ + 3ZQGȟJSr옣 M= &kT4*c,hd>jC,g[m:GS. q ^o[jߴξاֳlҳ=KxJ p2?w#WJnݨf@[St299F#Qc$3LP$ghV*3&bWB:o^WGE)fQH&V^,|h޼Ϳ|])@?zgŵ0|VpGomq+Jvk]!~?:yYFg$ 9lD .ic@"Sg>REk>!VZ}oW+V=<-^\O#~+fϪleT#TT#\*(!SpRO5b0oeAE:f8|s>K36/lWd)QIZZV3&8VH1{2|oZV]jJZ#u fboU&}QWZeRV]=BuũdB9{2 *S)E^9H0#ub_UV]WWJբǨ$#@i1,ިLd2 v]]e*1+rf`&c6K߫r9:crP̈́T! 
uO V2Qy mo~oX?p4prВ_a%flmݾU$ y%bS aq^>n$Ħs+#PJOU%Y3|ØF:# B^y7/mvoT eLl@ٟ\/!Sv6d*ma# )r2zo֐3w]]e*M=RW`F]er7^2kșJ=Fue(#b*NgwUw{:88AfKY N3ex1̜}Ӿk[TGy,SkVD)ץKzИ{FbstV}-^감__vyO?ϓ$.0g;:ӿ;D:tZfi;l(hBќI)pX6^j S#/\yF!<χtA`e8c@O83l >Ƈ"$2e\ӔR2 Х;dhe7ds^g ^ ءPfLߏ}W6n ))|Gei= 9 3:z)V "%C-3HÆIu; \h868eT!2i4emh YY1D7S- htTa]: hM`\dжiA3#jl4|{nkOՒq#2mn9Kp ȇ"ii-3#+a{F3Ҽy#`[fŗ*>U,VF'0FX0u7KLfBgMd8lB0RoOs}*? 4l/W3._[gr0NXcJ !*De(˿x]ɐj1JbR`21C ўt)`%e%Sh,Z[Fz'==7f%&ˑg 2LBLr92 ҟgy!ְ1NYzenVJyF?\˿S׃N`g.Nx_Wߜïߟ|߯NhN'FXA$FA,EWMs'/eizvU]^wꦿ58.-+!^~al%cѻ *q0^H .a'Dvͽʰb5UN;jz>i,IXh~c#5ԎӠ \S鿘=Y 黋˕,*mKpSzo1?ПY^ ~o[y݄Zz^M.IMǷ|F;v P6` э ^ܩ;-`8+&[13QœGILb_aL`&<7CJo?enkk>|oBY;l4:nէ2 8Mq;JɊߟ?Ѭ[ViK5fL?_W}A6EON/qt5, R1Y6fM )!:Lc}9 P , Ҕ.}rQ[ٜ]ؖN=דF.e~[ z ϕth55qΦ,-kUpKk,)fbE/m.gwW7ݝ-@|D4A߼g766uR.ËOm6f˛'g3Hg.ˢ \>Q؀"bfE'39[\r 1gm蘁枏hxe}yRRNh"6imd6V~24x4`[*ѕ+o!W|6ZŎݕUJt̿Ҏp7X5H՟A"hi\f"T;]9s9Iu]m>)dpu6=\kl+ mG_r8gK*Ls4-D%uRBӭ4 S*W j}`5 rlmuS?o oW/6{GĂ-gM^ҋYɽn3qnZ-yے>va] {wGR \w:MPSj,w|`**jLTx"ԡN ׈&^t 2MGQU% N$0Y)NLJ>%2Xf.$5чP.X&L"GsNxytFΎbl>0&p:Ϫ@z\^\o>8.jc4ZPxdUحu?얪]xGӺNCھۙȍMF6է!Յ/@hsեַvȠ? [Y}_JWMp'[*r,lͺB*YQɪ]Wix|Sx{YxJn^%fo7uy5?ݼEnlͧ[*nݤ޵k7x̗zʬTe29/^}rއHH:揤kgX7QyA4D%#{@ew*rzYXeVrZ9$H*ש yt!-PGKWVl81BsD%i^ş ddWٻV׈#ۥ{"W`8FTf/pHXVG3(U9-ȈQiP-١SF-Ӂk0L?do$<=$K'趒b0ݾ|oF팜팚gjL4o۝Yu"*Yx̆ݙ>y@ÑKtBd=pUbt|2LksA*}k.Iv*591 $zZDO6g (NJLBk9%cwX3YS ֭,K9˂&yzr=d7*~>:?LpýIg2 I#gfyay2I˔n/ +LE) M j\$.tI:1ev,;#ga\P1Ejw:ڼc{#؍F#`֗8Sၧ )r21e~3Yi1X%iȄ $-E2$kbb!EM1bV!)x߱<쌜VKXYLE0Dl}+cD%fOBbT !hc ^'A'VSW@Lp"@zֈu*匑HqRRp" șC:47%m& oDu.NMu 8y.U\E^.nZ^de}m@9Ld Y>[󼣼WQs}ITC^iIzc4{ّ2Gw^Aj A9c'a K l% ܛ\ 0!0g)%uk2E]3,y ht>jo[ Y)Y'u`Vj*(ábt~ǙU}qlS2AN_$1@d 1Ɣh)KY A5 S^s݅ =T6轱JG൯NWyƠH@' 4)V>>?SVR$*˴d |9C%b'>Qab{-xo6``ܠʄ*K0LeM(12<҂.EڹT3 d( {=BFH&GǕ2~Ok ?T3ˌ> Mz.oEYvSCNGpn 윷L.!&#8J˘M]) gNEo_:V@zZ,EAG[0/إ8+mt߸1%;~33 =;3<)Ř$χ D+VTҳPbYj,^>V#k5k{vh$S2"II B K{5>Y`Tb B%no*E2\ΧȲRd\\g,3`m]nKc ^Mxm6vʼ!tꆀ1~6{ A.Y=!$ԓԑaT\,ഭzku9l>zPaā)wWv{:A+/a88]}ѠTOףVJryC{织c+!+0(FSNz`^!A,"qHt,pڿ)I~V) @&$`-t*[&p\\*<[2ǸNOWqZeó78 e)sH[f:rʠZf"/e[ 6cm$wWRp\(wp)B.,cF@m̠@ sbaJlseYke1 +N,$T z>Gy >: d&y{yţ$4Lr}M岥/Skp_R1r6mCߖ)-ߓ=$ZJ}x׷ߣ]Bf^_Oɩ#|snU*m^[coUR LM]sڇ)~}M˚ &cѰWG4cg-v"jY"M᭦84BjHEZzZ >{\iUtmlVڪTRa$HಟΝ#Rk*/y-N2-$X!*STH]&(1p:=$~opU3(/b +I(Z3K[D˜RcSQz0q8_O|)=J>z]nI0j7C^l 6%>DŽ=@|LS)$!)@}!2X:a<pU*-""/ jG.x˨ @טXQpfU1m]I,Z-#<[r ;[T{ݒ;n*nK(i)(X(1$c(D*a^Z{RB8Q(r OZxəog)ؐ3)3Wpe2-S˔'&-Ԃ?#`c|ďcZҳۣ_~1wqZNWᅯ3Qu"i*M5u3,eҌHTz2{h0Ao gw|+"޾Ǔ sG\ FǰZ~s~=94}t황Ep2>ɛ/?iş(*1Xⷋ hPɇw?}|,ldy}J-_;KM=x"߿vpRE@)7Z[b/ ͬ+~̌χ7Έx~%FG#~[hrxg}?w{#/ zCL#/>\3D+ dW6{>"1;6q˟Kam(QZ21cPLBi·ﮠ4S W;f#`eHPp$~7~껣T[ 1M8"=}ŋ*3|aƼS f zu3Wm`x~Zcj czB 4X1DxO)'&El %{udX4y|D5}R!, wEՖ-]ƌY( #b)(^fJe  U1ՆPz]v$ ~Kn7ni.כ8'JZ^xX7 xƊJt_K?c:~H%1y ' 2: YTJjd>Y rrhyڊ=cfDS3aLȘBct05G AX@=WKt1¸ѿU?vf$p $FaAH*VyJu ؏r\LS3Z]mUZ#'pUttRN]mٮ)5V^|Pa j!7!F%egt_:Ewpf"¥_`}'5{'t1 K=wVR[%eڍJ5qPH@(oIlp%rș/X0 ;/J .ɡBm09:½EZ6n+.o[8~INˋ*4ro†cCvq$K/s'yRt~Ry籮^P-Tu7)\3Q\ ym"}12qDI@i7549>Jir%()*a H@HMHWJ-[zpEMҮ`1p%)pe*RU WObxїxeЏL&(Ρow1>sW] T/cuf464)F Yi٧~ϋ J |z.hE/L|.@U/{^z1T&&1ydٵXb?ON[Og+3d v&ɫgG$~Խggqנs͠w/uǓdSv;/:on6H)U"1H䇮E*5ku'qİPu1B]׻4ܟ. FL`Rg(Vb=gi k2c2]Jۺ^M$5qs::/u͋Ϳ*UU|>=^pzכn,ZEV9UET}-]&NFfu֊"rOYTZ"ש.yb\Z:kxkU-ڶczM'._3k*$7. 
q D!w$uȍoz';Q tS vKt)ZF[.}!s->uGgljyG<\G1QGp{b6;N^l+:o]÷l1Ia]@gwZc ~.yp= MS2E[մ4&n[K$V:;^>w0{ ^.Wb%=*&tQbhbM1Y8:JSzC]Yԓu^'V.(ߧ\loOb.j M8U ""r.ev}%"1GhK[-՞Xw4>jTZ%T?2~#1^ kXU8Qѿ#$3H4`s &ۜS0Z=6 R)xcx&Ns' P.>*YP'ɫCbq NOnߋd4M=fg N6O9k.hzVXL_n잌MOkl(`EZ>J WM!0i0{|S-VE_;Kaaﻸ'@kw8L KzWpŞ@|I"S.rpNWy-+bU(Y;]]#8BX5ȒDyGrl%q-HɄ֒{:SU$XU$WUVC+JqFo\6Oe)p*R|O4H6qsH.MH-;xc0Rv B,}lwF3lLzPꇄM!6RrHպpUmj#$ + v\J~P UZ/"QW1H.fMH-ŇWJ[zpE 5t)Z#vpl' WLh$p*Dc*c US+.4;?wH'hxIxbuBSf?t 10ɕ)0oĮ̌*$L+LԾƝ_g"tӫhۀyO֞4: 4=^ ƌ{ttFT<[;Yj&wO4+hGC#PȌϢ Su~{[%_Q|7mo:G7kTlapuT*L@ݽߠk6xz2؍Y:噷МfGС~&Gn}`g(IO~e ^FcD2Y`YL^jʈhA \h#2&"=OO/9XTc|te#]EaVrރ-[BP'pqm5/lzuO%e"|߀4'R,{ZRRS/Q7Cy:}|ύ,i5$ *|~x(`siT }wr쵲2aa^!X&HSE57V"FMsFV;|r^q *(~3o59Ba"0ߪlA[hbӧ(YM2RZ {=AB$ad{5]mfg5=l^=N{r3]c"OoTF;,/`0axLWFW9v؂ECex6Je!٠LfLM|gѬhŚDy!7X?gӆi'l*ړȉtT G-8D!ʼnWn0Cd("+ad)_^?:lJa^iGjd9HysQ b?`Xw"yaTnF{Q2hX8W ڀG1 IeǖX N:6Wt֤6׌z nts3w2𤊇"(`ðd^P.)UMM 6ݘy{_U ;k({fӁ¯{=kf߆(:c2%@YXgS| }muGE M R:żH6~l~h4*=߫4=Uj2 fXqgHTzd4`~ԍg T}]~qcƿ t}cZgFsxZ>uWn>/{d >81O)ܴJpfۛCݔ֟F@~NuS:S _sWf8 %e(vsd4u^h>{3*f_eIձTol˚̆YFeAzgbln*Vںar7RZڪ)3ۺf>-!|L_^R&Ky=S L>0ՑY|ͽ+[8|rnԡ0AszZ Yr~f'[j[{v6~[QL+/Ds-7} v?4<6\+C_iT#Wۑy}'FW"ESKjȬ=IeyνU ݽan s~bҶOBLx$yVCbJʘq0%aD,i.$H4A[sC *($.^fyn~[%0pso7c}hP˸u3΁b(GSJtxL!bJ!R C&l!sLoZH+e0Q#fR9+uz"*N;̓SN'A!.3 %Z1!281IՃ D |4y"Ct_x6]FX= O?z<` RhQaDL%*ON )iJ4:yZR*Bu/`2AVةvfٮXbŏV%z>qQ#}:$7b{E痮nKϡZ5X&VQ.($<3`N@44‚UƇW`9ƛuc`*vSUٕ |m Fi۫0M]Z6oaMpzW ]]7=<n$?mx_OOfti2pϳ7gKy2ǑsnRTvYozOR ]> ]gewƹ`ct2QlE>Ӓ)E 0;2^)Kaǣ>hZs@uG+ܡP݊,׮W0'vpr.UP޺v CX= y+vl,e^@EWc&;􅖋9*رʶ蓈x$]@2Eq ;.^8Ϝ D"Fۍi!F<T )Y;;cƌL"^ˈiDk45DJ5v(u |rW mh&n28چwK c/c?\XmA3AjAZ'=O ?8;o02^.vIi% bx۷ֻjK<'#3p cL%նflݚ=Қ.lM2ԅ 7c79,Or7l`t;1ii9RyHif: Y{%%)G%.$cx%$ItY$) Ӂi]IղnێabnM:ڸe;!حN8kVSG<ذhfB$X-r0XT#P ݸȅ4*XȀ 1+ A ('R2y#ȁQ1-٭ hPHZֈӈFIHaJ K缷F',Siii5Ƃf4hkl: ĤglsÂTɁ%MFb~5bkֈW#I_vڀq絳gc7Fg!,)D>"%DS# ETuu,sibCqɦz֋Ӌ;x{,|b:XBuȞ=$XÖ!3#;xzq[`ܱ>TC>6`z8Je?S1qǙht /0q$=fRU+56Nlb*hVI(hSCڢ@(%p=TO:FO$#V1Ōx`#&i)G~U>ZEJ v'Qւ SExl :}YrDFΚYsmE F:TJc&W~ٵU.⊋>dmճ-KO̿r΁E+D Ybض!3bW3"96< "gӫR55!Ycʒ*b Wё,L`lPW ǯ se(fwRtxuV-k5%$3Ēt Z1gMwXtM6Huah ;ߕiZGrbR T ئhSb@}Ԣh\ T6:ҠwVXI!9,\IPLAAI(052s"^?vӰ?d%ku?0ԨWkjjXO訌Gq}ӿϯ~z7{{7/! Jx_G4·ơI{θz)ҸŽ8g:F|;g$܊Y$MՐؼ܈~F0#Lv9h1MGWʡ9U'%ѿ|⁦4mjyZSb|ɇNj/5 $)i6oli`>?j9(oޛ2[^i? U;kh'Q\95= )g=<[ +6[ ;jBs1MSjJ$itpttVٴs>4|x2_`׻h]v.eE6u] 'xJӷF+~y:G\ 7n̨43zhZV EOޟ?3ېeg^Wk·OksV*Ȭ=} тYzYu0Y)L\*n_xlâ?;ֵeFlg\J3^_;w|юNxO;~;~}osovm~ n,}KDrI'bB 5&_P}2[#bT3!aucVz>mcc/ea4vsFgXǜm FhMLC` c(wI'M$O &O-<%h**m}07w¤k XK!vSCw399D+A(L|*Mm c s[f|kkn#uX[ hlwH랐V-fHkZ|.^+ܲR11ǣq;?VbڦGOu^+/iᥨgG㳛l ~}28ߌ\w?zq#j-ۥvnF%.x墥cv6Se+gX"g;o 8=գo_ha P̦#¯8[%G1Y]-|YYq~huT۝uR駭au_d-;.gNNLJN>Fi{qe ?{1`}^sst<'x|x45nր1LVQC7wCU%f+6@4il3DD(]m:]:]ma RheI^EtTҧ% ODr6aa֬kt>'QXST1V78"E`iXskfIC ؈+zLjrGz'wZ3u_;f'83iag)lRJE4NzÙ3n%t[w[l]lY|Ze6*kb$K [xHS !.eg :yJqZ E8sˆɯ1/BVWg'sܵ,+3pPOP] U9j^ )sLqI(26*:VqdISpmTv[TpG (䬍EP$%e:x$6d],E`6lE(K^ &: AL JʙHh`MqaNK1wF=uWv_ʲd@eQF' ~d+%Z a"ԴJ`1. 
W2IH kEĕM[ҺRR$SS@2^6QNmZP/^"-LkK%XoHZQDI$[/NN`}pz;gwIrvJpk*1(%J!d RZRU"CL5c-q ݃}q.ʭ@[sAp=[:)W|= Q}hBCP&!-+;vg7=(nynU% F]t=%VFcaGYMȾhX E]\DTo*\bsQ/" UNe |:Ap0rVG[ijo4NZ>FzLp'}w]9OBI,.$AB)ZfX,ZH,YSligb'} t6T|k2(۪K(v |oɿABCO*Lώm &C~4Eh &ΫWy^`Y7^ %FFR#9BB|>SLbt BC4ȵZ Kҕo&̙'pOMm&\qT݁lj#>}y@:` VG%w!t:*VGVWXCw*N7UW`k:PS ]]U*aWCU%QWLSmb*+繍A399/F2٫q(-V3;)P5u}K+5&i5[/$}ѾVT–녬?w:gۈdGlF#3lV6uJӽLEkĊ(T9Y,Kn':،k~>JsKek6/ڇhQGOjcPRS`j- I0)"dh ] y0` >(e^BQ$>$cBYLOxDz/j%I I*IP p9'% )f8P*i$g `IK M/\FG>ִ^*E똂Dcª$L2h t3(IcdBfwt hf"z`q6,%Ft"+1x/MqT7 ff.zw`Sϔ!YfhtA Q5Yak>T^ TdbKyPu| yY4T ]^Ys6: ~)(Du ׬`5kvC'>) #ȨgZZ \4ƐUl[_oU"XCQF&}.Vk`='2!nD1\qpݪkN0:ЅW;QyN׻,lK6u,yCJZǞ!JbYj#ƈD *)0YX$ "M#k`5EtV[; H6!`nMK Pn;0Y޲ .#Z9#2XeOh.I2rd5)1@S`Grl@DU4]u=utCD8+"!hަVd@/ͨ5kyGD)H6=md#~@PS!HvPW.c(^eFTGͺ"Z_RrpLrtUA 5$4<ۙ"Z$VcMH 0j"b #I tUTy1ޕBo1"5;Τj0~Эnj1#.Efz18i|(ؼ(ڡ;II/MY&]r{o.Y*Ҭd :ҕjnܟTeDX()`0&`2Έ A9!`"cZ5yUA/|ڄLknƣ'&>yT_V )kIVx )n!A> ,T5E:ՐcWAZDYьmV D v*"}7Xߗv[}n=ҋQuD!*Kt]/s&#Tuo{Hf*D;"<6#-ѧEkhg];VƁ]"鵈T([|vw5hGm`鍝}NZl,4>  DhRyd!.h]`1(Ψ$1GwPa4,,{cF8&gو^`DUFY"bEnwMƝCu6iPF+ j2+o[ cj5w*x/GCVH-7mH QH6 Z(a* z0dk6`=A; ns28o+O7<]gmùI}`0$֣{nJ''f9z6Rf5H((;-Y:6Tk*&Zۗq$>yh횠͐I;_n Ar4c2vzNQfuH $ hT) SRqX 7Gbb '~W'Mm2X{iB N!gпŐ?$oQ^ѬpwLO("YT1"@jQ$Fb1wv`\tFdUA;Fт9M!7i]s035ziF&E32j̤yB2Kfj)څZuZbw*W!*>@MVkAW YVqdmA` G` 0h iaӐ3`2)z.$9%a#ȝ*St%ĒP-iV!_UF<AP"m֍\4*f. !Cc4MZKIP$nc Zz a9ouhD䝩y0wt0j_/eR d|@+꽷-Jt> %ұ}@b> }@b> }@b> }@b> }@b> }@bt> fJ> iZ3j> 4}@2h}@b> }@b> }@b> }@b> }@b> }@b>CY 5%E{Jt> |=> }@b> }@b> }@b> }@b> }@b> }@Z{1%א3q{Jot>Ub> }@b> }@b> }@b> }@b> }@b> S^t-5׷}b̯fO@.=%RMc[\'c[Z޶@Ƕ-inǻ +i'CWWMeIhwPF+6]me?;Qh~:Gg=F̞}ĮL!}=SU|:²"oNÑҝ {h=_;Hn:|Q1"j69):~cJ 03tPō*,28+ʫV^w(ToR'݌1_fyG*`1}16vKܘs7)L\Bn̿B܉s:Z>Okrq"l\AH_q؋Nk;"}'(c'l[v}Bܴ훡]L6ǮW4IfT-| 1)ԳZZgia-Q4UGKmz+KΈ֦:E+!d[$'oBO}9 Hϱb*JhᲒ;%L+kTZtE(E1]]@0"FNuNW( B$#5`:tчtj (LWHW:Xy3fW?-^zt.88~AC}tyO.uqvq0 f[+XHuU:}Zh]6Y<ލDrsUBD+s~:8q(͞ܬRһ&DW ]\iBWV}+B#ҕFi7!`+d:7Z?+tutjJtEtF9Z'^ Jb蕘]`'CW7L6tE(f:@J} xtFn?sZ'^ tutʮ84Cp3;ksn\#77myyV]M Yry.lmU'k@'MHR[CD 2w96o7)+TP@jnJ@hJ2k iÔd+kTІtE(#??DT]`&CW_8n+B%g<\%TЪϮaЕ~`cu4/t(<գL=[AWM/i7!"Ͼqp ]I^;]}{i+rBtE T hՀ22] ]i4+ǡuriPzʹ{k-v1o_W/@D {55lqz)5ȫywi~]/;N"-˫~|1S-O7;^Ͷ)l)]uJviŦ}'4-J)vޠ-~g]m.Q;~_fO픡]opi:VYz Qw糫}+f/أo,| no笱?6sb-7:PVێͱY7?=9zXPSP6#uMJ5%tV}%Eέy)i1Rk*lpZRQӤERU>S܇`P\PVdHWo#Ȋ7ߺn%bbk1Gu@ṋ _͐||_+oWoҵ }mZڔs>|1?[n1wl.we~ݹn)|n/(p^jֳ*^^b9ZYcy'żnKB]$aq Sz C;=?ݭξxTlqOsv5Q,]C8Ù:Z^]Q7'_:}}@[g]]u?s]=t+}v_ۇ/xf>z_h^w~n/tswPZͶ8^;jKK\*;F'hgA vzz*iP=}ձtX*w,RG;.zLXs7^wa#/6&?RmhM }3[J ӣ+f[޵j>}4uxU4Tly| MTuGߢQЫbQ*,[Fo4ha~|)ł Z/K{.ӣ,b#4؍/֖myw8ly4p6_BI=gy2ߓ[OIZy+ɘ·b5~{ky7dJttI3}]vE"T%aJvmMk?IXƗ&&lRHsEڒtBjzO:Ҕ SΫǒ}lɾHɯCJ~_tr3O[>j. {E1BZuʖTIFG,z@K=~[Iy"K3+ HmދPF_lw]q˼Urq~`m pb|U϶F7m}'_MnpiO@~=g\c(}H+r; w=b}-tr҈]}}"X$P!^Ki%`}cVS{<( $5I͌UF"\Ÿ\.sqOsWiuЋ<_ ]:?;?[,-eK"*M1-e}.S,MV+5*c4{*"g3իXEE-B앉Dח֋Ȍ=;guKb kOem92kKfm`>%c.!Ua&^?*{eBș: ۂ {3:JV(dd@E<6Qj*R@T7|8p6"}a㩌FfDŌȌxJBmkFj]o[GWefG%ڙu#c}JӤBRV[O(Q'>Wu8b!JAy@O:UL]>Pyۜ3ZlRQ7NYi&SI1l܄4-fвDl;K닄(3ϥuJve(:ŻVYq[ dku\Zdz(hr! P{Lap+P_ N.C.í_Zxոض񡑛 GsMp}Zd)rm6 dcAk4c9;MRB@ˁGѹcpឧAjRfh2zhff+LT!GJh+ څA !a2s擎:(FɅY)κ_FvmI&D@;+-JȀy<ջi+G/_z7J7F5=| s"q L+ˤj fܢukgl避/n}cJ;-t!zNQų%Y$[&xo/:s3%+}x}6*q>BύHߍo M=2ˌ|Cq$vs2F黠`ߝ>(W׮7) Sj׃2-d),&f{]۬Z4;J_!$(kVWDV@fi^M.^.kyk3K҅CzHԦQץBa\WG8]_ڹmؼ9f .\:,S c҂m`@ūfr!^^?o5h҄0Kdwc1i RjЬMgߕӯϮ6wf̃3ߴd/ 3"G>pQT]1L1ꓴ bϞ1?q);R",RRIlR^q :V̥bXDŵyR+I4<ĎSgm8OQ{]OwhzwƠXչ eZ: ͈{|S uZ[q] #Zކ ͔!<g.f©.#.Z3mvY +uKm̱,|୍(bVz(9oFÚdʘ{{$Zë~eOU1cCET|l׸#TU6NrnnHzє3+ܽ@<%Xrj[R6%ޘ=ye9|%mZkf! 
-rV *)6~Aj<9-@*!i4aFeV 2$jq47i1]#,TY>0eBTi61]u6 )Ql ºi묭T yf1!s5Zi*2o"Zwȡ&+k( 3 uEƤ{Ow $u4ob bZMq$}nfp 7ogFIu1pAL̔VD:M(q=FzD88oJW !M OT1`YW&xob &M%b*gddUf 5ʁSVrmu~9T`87YHKq̽tnwX*博`rML]󺃪m@U{7T]0 :1^=^O+~d2XDLt> ,b( Nka2X#y;eCد12X.(XKh2eM`Zӑ3%I;r.D7lFC^r m=X"9d {<5$ ShZw(R1{qӿG 5cKK~$Mt9hɜ Yʠ&!2EAy綗68x8"j⠣sHY:bȵ9`E7sd-}txz3NHBNjS % soxFg),#[BSrg:_ujxL\&FsH,x8W%ߪ1W|L(d!swDq:;"&@mymCP:śŢdUiヘtYHC! ^pG\o؝i?&e*Mn6YmtY#[ B@!Lu#ͅn7,C-JQJFKnuvRp%wz=b<;`. ZwV(G# 0 2+6uiPosP[ .9Wsu lZhJMBfVFZ¼S5.# Ȇ*ˏܣt %{ÿVPJ|7o4ӋXJ?Md5в(9*9?0BtvoZ/³߅ȋgok7~{q*J6: o?br|5J. _=v^"X)*uTIYRP|Vo+ݙz/g6ѳo^Bn_'|r飫*gb?$ϧY7[_NPˆ[z\~Q6WOw~ jvj$;`SlzYg[Ug*3>{L8uX}^]-Q5 v'TBcflvc(ӈ@ /ApcG=sB3Ulgyf|F֜=0U"D$`"PnPE1%b*\2QTc|,yOMJM)oҝ'c\k8n'c/(l Vov MM)EЧ)9?zB21&`q$,pwjsᑴ؜7ヵ+V\<5;^f߻9v@0 t&~ iQ sBlƧMj&Ut] i#VMc<T9D?bGx@Zr? 컛\-TGgͩ b|uŲo%5Ek3?)ɦ^_kHt+W/ɜ:FC Z⥷$s}u2Z=ݲvͣxѲf(eQ貟ŊҎ`ŇmΉ6j#%N@H@1J(:"ȫEfݦ#2Ͼ]@NvgELRbM"hb3iԝ _5r }naJ>LK VBPσʢ)e)ktJvv|q9GQӴ]~ UsP]M.gb؞0?^OD:y0c@E) [קLi {`m7Ke2{3%˘B\~deI8dOgɕL\%2UԵkqWǣp7b+As z6AߡWtU4?4 KJ,$[#e Id PCՒxȿh-D{:*?zu>paVTXB<^hJW.0MY8琪"2# #,YCG G 4@| :za"WR˼.qDBFyasiΔ/7Few w p9;WxCOCu0:./q}1?+^q,(`NZ i:@N./ZK oE*: Qϕ1cc Da`HN2|(/rQc>(gn*sa湳 g#̃SGAdz-3GGP8hY΀G#?q(EGFF}+C1ےA_ֳcv]c*C7jmzeoܐ \eX W5PQjMiZY5'ykYFeIi sIf r<(v﷟6+*p=|aBf߿~,3pc徧a^ #*rK:իoƣȏקt׹뗓wt U8}]GϠ"|5W^pUk?f8Nkmr:9~W}H|^P~Cc̦Ae<򮒳+!+_Im l'g@[UúHݝbrΝ?or,q4Cg6NkI sa ,ahn[XDx1 b9o_}P^\Ƃ~,=Fܣ:o4KB$A(W VA7>Y\vu+6+[f::pR9?|yYUw$,1Α$frdy C ^?"s\Ы3xe0YžKKkANO~~U De %H^*vTa%ঙwR>W`9Gx/(+a?V. (~·"~Vc7AvK@vte1A'٨Т5gfT/@Qj/kK=lx lJXܯ_̀;[ړ]QG;!prCg_Mp}[TegH~a/4Pf^z0UH_IY9ܷk>>XXv6{PG4n{Q:ߔ|vij=ή-`9K(9Bq',D Ldt &Aʐ+۝0oy2SZLi7dO w/nyP f"H(a9a4)B˵606Qܠ)/c?C Qo]oc{3{d3 wONaC5mM6v*rnD.\lɅ#cB60VJӓ :`T' \|yC8܊WFn"0NQ=孰܃{JN9qYB9 a{wH"mʨ>Xw c{g'/ ۵iNȊP"=ULHe*7{*xёS•$BJ!] ]I|Zm_R}Κ5W+͛GًYli;aR)݋x4g К٣GyF>CO{0jaw-0hXt.44(Ug4m(U3ٴ?(yݞM0䣯>HΆa/h/\< @7$.A}u̮ӇJ)'RƐ&߆yM>ˀ?ՕEV7-*Qxrw޿~(2^$yF{Jb$Y-Hi#IhDZ'3"\̙'TtΟ] 'DWT bNWu2HWpԙp)BBWV Q֬4:FR$6hWT 2JNWӮ]5^[f} 4m O35w+ـdGWzj,qBtE:tp%Mm+DEGW;HW2S+ҡ+Y*thm;]1B:AVXI+n%c&&CWVvBm ՃЕZ] kKGB< t(+9^B >e{RieW.iPmC{6׾9 pm5Xܥ2>Ve szٚ-M_W+]L=ϕж =yΪuahWτwYZ3cz^zʠ`N^'ם:6PNC'&׉{GTJNXtn: 9D+trRn! 9M8W*!JCWa܉1t(] ]M фd ]!Zz)DxGW;HWXImBt%MJn4+f׫=B7l_OnW-{ShV--4eYTR]ݶ)%DWGkQ ]ZNy Q-XgGWBWZ+,HF]Z#Y1BhGWHW>) 3K+˵IVF(] ] y]`fpE2Ut(۶ՃЕd$EW&DWwۆ JvBtttyuy8).SM[Eyk@Gݭ6gY_ t ؤ7 \=e?Bwz 4X2o=jn3BWV~Qkp)LJGXn S3BWV w,7\ۄ ++9GڈR ]5^frE-/5[:oҴ̓n@Wv=ٔ on3BWV] ]1a *T d+Ճ:F+]`*M2tp5I5Cٶ,]=] 7+ld2tpm2vt%nYJ{Wo=p32vDL Qd+-LeKXҌ}n#Fhk4W^/iPoCE-KC05Ʉ`kަ)+&k{mt4^UIocFEw+ɨCz6'^nyhdd%(غVdA3; 3.HBe=wG)7WQ"en~xÇN|˔Y?iQazI螕)z.+0ߟOUIKe~2U7QQ /ӇY-} 䜮i翋?ލgq 'oFwQ1a^d7_ Gol`Yu=߭d%U}|ZAYP/q7ۜŔ>ȣr4¿=WA=K8% LZ%n ,3 W[+êMb/FE=cǤ'*&.F \hh8sc&\F"VS--%ASP`&gtx!xS*i2X'd!?8G;Y}|u6̽J^z>FFm)1 f'wv]N0፛/9?"-?W'ZicTǵ!pE?rD7IgYǪƜ \Gg X}Bܦ!*Ը踷jL= 795Y&SMې Tx!GgZE䠗jôа@w0VHTÈKZ(vl(ܣ8sx4jR+LPS+-7kcU ՠʂ<꠽PN" uVt1'" O.UAϹqq6:/I!E̝ RW->)&JB,/wpk@ = QSO#a9&eXÈV!*|[2=#$_nh6~61Jyf?f'1Y8qgyۃ|Ã_s#(Zm>a|S~zMVᄒꬸnvR˒ﳯχeGtfn p:!fqhĐa]'#/]oGW~v~?{ .$X I߯zO/QCǁy]}U,6N[ZMFV#y͟W= S}67U;zoOfjxyy1}}R\:0J|zJ:,yϫj4iV/aЏ?6hGPMnbI- sk<^H?,Wgo- Nmq&) J? 
Go1;-J?q;6 n.UuuLcѽxӿްM?M`4]cm)_/V15 îJ;?e=Q.FN?f!!G_.K|?[qOoDŽ͠$~݆XTѓiK@Qen,,7IKVpXA + C:LfZeys+U k#Qg $&iee;Z}]XQ^Ek3-)i2tV|4{6AW,9zм<(Љ%0GSdQóe9n*; UzmvrMQ 3k5`PDfyd63ϤcL:~܊9<~{0nQ&z_fwc:<) R^&f*[O3?Y#Zp"+Ewy$Mo8}kM:ĚX=1)s|̎؞rԬkb+K9Ҽ"NXPwAٿFhnVꍍe[Ba4=@&\}ٛwj4z׫znvSl~$۟޽^O佴M)̳}Or2N'4a/Us1bLp6)6]͍wVȗiGZ0e34m(%.-N{&sM?1sP^*Ǜ F{hdQO:İjпAzOC1E9f ́F)Zl:dEl7YIou_P&w3wtq"-eğ;7,<^:v>Pf~6Z$ đ)304bE<Γzmk\ɝU]Ջu(@hw%wnA11RRWnI]5wgO9tf~vf!(e5g{L?gr^I{>G>']9?Î_W/Od<ûm#Ԝ7УRj=2ߵKmOyZyg" -}& S_en7!n.<Ć2AA4I-Pv#FJbP*'$ΐD_&e3#ς+LNl9E)pÄZjbA`1 )+22Q)}FL~6uXv|?)nQFt9(/HXf]@dzBK$k1j#4'9lhu)/7 1LȚHxcI?*VHS܀`- _mqY }9X6O~[E%mXڰ='T?To;N;)a45D&3q9(Tp$ժd43+Ԍs1K.H$t贍ك\*XQZLBmZuKJkq,]Y(Y' O* Ns7_lR@nd{7 0n3؜2PZZ^_<3BpY+:+a.ZIBBMi,FE+|$%s> &c kYbVFøB1Ej6;ڼe;!؝J#fp{@}*,O 59hKN8] NsE !WĢQ&K&&F$pL9!udH:ZUg8>( x-MXVZQ9\5%~&j`mevZS-V70hc5yK_K_Kg@BrPDr~H82c{= o^5 %ɛWѓoǛL]QBOz?=†&I+݈SqzZuyۯV[N[y\R/,SZuL[҃$ uVokYP7WW =zu|zPopBwV,:ڞCt[4@ah'eL-uyT5_|_ە m1;o7T@ĶnMhLbGi ^,_h6F7EU dRJ*/0V\piprW:-!Ox)q',^kA>Û~ئۢi?mU˒_Iߗg4YCkRG3ss W՞78ў#LSӴ:4Y([׃Qy58mC=$RjIb Ti*ХR~T}@zwZ9YI x& NeǼ2ҤՃXJEpF$sU11I}"IĊ'>`JTɊ2#@'ڊ5mdiM\C54ې`sd2M `YB !Ar6 e;TJ$F (̊hXkz/CӛR 1] AۙWz=L%g/@k>[vNZYFolm\J{jO*dʹ!`)p!rnVsCRn?ʕ,JKL}beJ \6XJrpԃ[zZݤsYW#q:NIY|S:! c"5 0LT`hk4S/J/ H`1?g:hIkm[sqt'7$ WR1&6y*˪K,**9OWeJ83NebO =1<1Р18*zБKa(s0*-'Y|Ic@T\143 &'gRX杣Q1x* ..g̛[448I_ 6#ݱO 6tjiMFJLe:8}#p<*)~X*(gӚ,i9YWq MOE =C<4Գzzyi*$hK啃JA܁;kb<3O+̃C[=!7{fg=IQq ʳ N E ^`VE8(HQ:_(#~ַ6;[dZ?PP-:t#-@/A86>(XriH&%`RppEbk 兲DԞ4 *+¸vD:a")zC_;ViyF^6׆K̰( _&؏gw玪?0=>&?ZwGƓ7in^{ߓu>,x~ָF/f_~zMQT}Ag/gI*ޟ .-n#!?`e z˯8=!x)z˶V..^5wcKq&ˋK }}V w9HSsºA/Y~rx5hlY6IHQmVo< B]ߌoQ׹CmJȽFjC`]!G}O@a4]C??|y_W$(HUr$sOE nuwj0.뒁.E]ZtÂ׵`B#+u 7-|P·v)Mɜ#|uRNa??&[68|@Ce0 +~뢢qQ]S;aqL$\u3OM'@o/z֍ywI5->Mlt}^ +jHv߽#n0 0j ݥ3{Dm(s%~I<4?S| /f:ChdWcW6 jh˛ř6nM{vӅlw ɏel׼殯2+{&o"t.&zZ7{Ce-/Zv=XGmlG=hŽ'o6/<G[2g 1V&&C\0%(J DpZ8ivNse/ϕ%|xvi>2P SX89xR*U1l4ܐD FW^t?ꤜYkmO5ٸ'L|}'pNXsUEnJnJ>~2i5J!lC;Rҟ g-*:,$V%0J͹*S'}Z]AnIk5;tet9 B%h"0,WbӯI"JH҇$_x9]W{Xe?LNoNdYFVU)IymdZc"0}éx9 GI@V40Wx"_/B~&WlŤ}w 7Woi&.MR'>)_Fy%󺘞 ,5MBf yg~^iK)H)ASW(|)Qb ի'Yy% @EdsD*!NWR^ ]9et`LW[l[ۡݗ1ʮk-tءg3+w*r+D t(SJIN$BRBWVˮUB Ч0'*< l+D/GPjҕ(] g :t(W0%$䘉cJ$I P%[[sX Mx"l!t<0ʹZ8ʃ0I8'[ww:&E$Kn%X-Ŕ; T䜻*lgb*!fd%OpŒCtݒCBZr1YFtK>9e!ҕfS]i& UBKMJ}C+,+sҮ`%Z\*5zzcUR4}ՖkKWۡd?[ݢ+JNW{{zS K ]!\er+Dkx*T+h=]]1 k ]%-+D;]!J=] ]qnʉ`o-%}KNWR!o \ ҕAΈȇU7+%JՓJvcN);bj9u[Sr֕zRzN}IIm!ܿ䖀!Md+9Bg^R^o8@A]%{%\r+Dt Qv, SOWCWZ0eFt ˆ\ 2uB\tute"MdlqLpٳn?-ˡ+GD(D`3={m쉮BhǴ+]ў;T#IFtE%!DdCW\ r:]!J0ȉɇft(K0ZgDW\(!]!\r+D t( 2+lٻLBWv%C+yot!clAD+;o "JaҕA^DA1c GI r>d~kJ&k?uXUT<o:ܢ"BHJQ+N W$*pX6Q^(S#^=ԯwweV֠džT6Mz\ǫºPx8?N;=i<8ү帪8N|~yEKn[]\F,GcYt۸EpӞZ+'M) 41 >m򮪕;܇z:vRᐜ g q*O}x5$1@Vs46L3gK5pWۻ"}gԭ$AS'߭Vўz?wnGH=~&O݆LS*ӥ#ǰAW^z,gj/1,޷sh94@ʼna>f`͝o۫L!!E7&C셪avʕpó??|X*WL%E v=?+3Hw #é\5z_ &7-]>U/&%u66%dPsRx倖 )]-/N_Q ZLʴR'Y)jrEJFȭFwqf:A;ݗgMM饪9.6/ccna&F Ubs3aCsES$sJghb qK-)թeO{橕,E'4Mv:ׯL+݌ntϠ>Za*o}?D/eq6 VsڂM>rmd]]TaapD_$};2 K_/E_gh *vIJ.+*>m.zTUaeU>\CmJ o| Jvt:& lP~<47Qѿroǫ~`N@-Dy5hx^(\ 4*GWˑfWy7T.U$caTprVJIת v5/CӫhevA]:RTJ ba=NMqr$A3xmh,#JK-ˍ&aȍL8eL3('j؏r\{=xr:UkmUҽJH*{8ɶ,UӛQT=ԇtV$ݕoruµ|:*6w;[A;q,\ iD*LN,Iֈ 8I2#)`y}-aDS90"XKmԊ2=&)'$uXZ JyUn5ZT *5ղA-hhI]P,G3%*{W2 EIlc׫ǦA876;CxKp"Î_n+> {z>J~U"$XR* Q=brpe )RH4Kk2 `x80(ORL4JHECk%7,aML_̹NSneyd 'LVPa9)"w 次+! 
.2m^jhP"~4ysUY0hLAm\waP՚dW_ Q(b L!/EB+ؓ9!'=UWF݋g/y4c8dS SV8Si$cj"j ^t"y ^- G=S vJ& ihKHgS0!!B`hqm$mU@ q.{?t^q[3:l+r8o%$ES‡^76?<Ż;w9kÅ0ߩ&tPhv2pEI PV1õ)Q CR|UߏGۅ·$\n2WQ>`R@)Lт 0ғ` t8 ocg?T긪h]`bW{O߽|sZ|<ׯyfsUۏUWER W[,@䣳Ë~fEgpW}_g3|njk#RD>>-~\͐|f ;Z|44-̾U|L>ç쨤PxH&g6KLp3 ɨVBZ'8C]UBuuZ BBk!𰮈(RHUK;Y wDkSghOa݄ne'n?x6ԼuU${dUAt#̕%֕!rWM^r]RIlq.1'GVZF*OɅɪu^MN~њ!֭ )4'6'V͠ Ƌ[ӻf]6OAGþy^*W܀D߫Q`) l?0 ހQmZoͲb 67҆U[-YuFdXG&nWblmn[C {CQ,J 5ĭi[gO¿T\s捧j?>FSMK%T%:xH-RH/&2<ݻqO,n|yqKu7%u9,Ӽo@ylzᓗUsUgYY+zw9V.NYxk ½Ok?R6eknx8P9c&6e9X8S93Bu,)тjch@9܌=n'}{\;}Nb$zL$0ab(I -ROІp!NlF0.p~ vtv 3M B`N8p*ZZJ6R99FLí,6vVZWZ#{~'l]vv}$ەtXÁUTqr}TG8!"NAh0Ǚ۩i!;VONJ.] ]M`ɰtЄku\a64u&Ygp&aV(8y"թN%({+0!R&`25LCRS u$Ilx{.uo3tv4, -OgVsna5Guu1yua^>TmY9jNvHPT$`iG簷 I9; &^PXJO xbEP05v&p!4p!;"DBaBBb)KL*mY$pX3'm$( 3ͪʼnc3NKˍJs*R*#q.vR/q NXa8FT֞s(ޙSښva$0i,%Vӈ9$`@#S16* EN2 sZ!4{FꖑT4jR"sc(YĈ v![#: \!I{ʆ'*9KH`\N7ӓ…~VUG?}{?})^xz *?" b[eԯtkUטӗ߼ ]oz_[}3%2cғ}/L4Fo*ĦO:Dk8FrA*BN}\]& \Y訞>*_BQSR*cw8b$.Ձ٩Յ-6:u\EW}O?:+&nt»q?7 iVh Fv[V㯿^6Y}T開6*{Viˋ~^wm/x?Z_My> xY|owx:;ZxkwQ[eYN8^ZeUf22*AskF7ȊI/QF%@~6Oqo:]O6p˲^WvgݎPIASuәV 2ϷU˖/&e[ٝlmK i%Q^h_[1*$lٹNEwc<Mt`dƥٯYfR,xQ]˙xz44CIf*I :٤ ~g FlM3wJɿ}Z-M5N%&X*! I nvl2Q*%h.IyJ}ߧxI:+O>Ϝ,[ǹ`^@N͉2* `XXN%NΓTrNg4qx9}N|Q}6ha!hZz!c0{%XG.⇨LŔR38U!Ty)7ۛ^z$MK IdՇ_#GHĘ03GArTĘ@^j'p~{''6 *0^{P,^I(ðIÉ2_&^KRM#(Hk绽2G8'p /B(S̘#kS1+UJy\h;uΎ^J6o~<ݔMkI,Q[kf֤,pUݕ[JA-]xwp\¬]9[]@r#jcx"p%<tͷ6qպ+dwtl.]^un`7MY.Qn ZnQonrh>nǯSolG5 GSl4y=]ޡ5اV+j,嶷6Rʩm^K0 tAoG6 D'\ѳ*/:49z|㉇q?La~pʁ/ބ8&#w?t{a6{aI8RX첗H;Cg84G*r *eԭк鼵m&y:-;<ݕyd:^wuqyNIV, K"#-y$,kJr c`]9Gڿ~~q77ճۗ..wƺ-xx{gxd͟ΐg/_݆pl>Yp%WQ$*& WC_y#]r[7&DlߨD襹:!cN?!"r^֞O;e!;@j62N qdUa8 B2+~Éog߽x0|?^˃{&~}w9b[۲IS [.b&6ђ;JDb[4 7wO, !mC #7`B%p.UDFRwеuQ&#pGryQHSAiq,jɨmW^jڋ8y' 'JŶBB($J>ĬEe(d0CPJ-kj3\sı 0}5 QbΓpZ8V1TqZ~n2"WD|xZ=[ZQ֒CPߢJK t[䓡XcghFv}LTq -X|BI;euEgL qD[]7OY,9i2.Ҋ+.>񙺗\֕,K;e($Qzl},p!05pTpZv<8< 6q܋6<n[-{\/c -vq܆*>: V},^ZxwcokLΎ޾(ƼoƵ]!]^m1P =ܱfWز ui Y燿\2N F.-l9hl3ez}s}nvj$}:(8¨j1)Oh..9 c(w.1, Ux88.k_jBw}|0R.F8+ɱtLF5{[]$d DĘ 6xId뉴w翵 *'`;Ͼ{Ni'Tvu딽ٞq.zjL=Mz)){sl<8°>}K-1 [i4ϣڬD&WCTKN+Nxcܣ!.[sx9p% Do޼Zʥ޽ZؽD&8Y͘jEH\rj(f\fP5$ uײ]Qj%gba!<̉ݗN#pɶF$42j#=6|go!\Ev-qp#rɏHцk3ǀk^ n3HcSLY Q7L#AMG{u%:+lX,/@TR ' 8cn!bvThZ\͚[[;hwtJbzlXc&ݜBK"w͍8KKcaxF;4vO7C&(Kb3q\`QO6XTIԘŎL7# N&Pؒ31K,1 Lu1qP `ZcC qٞ4XK-1{u&G˂WhZ%Z V<@ʙr,-xݫJ 6p AAG.P[f'Cqj(g.z<%12ZC^jgǚڨn(-Ku*̘A9MinX~(X|8N`; tuT@ܼeÊ%!B&a齈!6 v`g eRJ-j| [T K3Tb03% Gc%d&X/@:йڂ>/R\.aBI !Q5%;,\CP f9X `s;XGgm0J  ڛ(d*xc((s4e`j %2c9Tl@JoΡqh=*4)Ɍo1($Ɨ ɢ5X] Enf]\Hȁ(=7!}X4uNhN7cD)}('k! Q ȝN;Xvd[@4%MP 4PG3ؒfr>B|`! "myVt1*z7^7`=OH, NMjdreyo= knzJqH:T@ d- W| %"03&|mՏֆ}e9؈@!ve05CY?!^gNg  E zRkNC)xyHVCy!#M-Jt9ͣIѩz!m e@v 9j+zP}s~qPT w;bE_qh"rm6Te@ZXr?!DEWV2 -,˘pn"H4QC$ /wu3Z`p GuPSV}L h pN9pcHQ2S3kj+H͚ um̙h3;4{2 UU)H  pM%G<,gPkwxjozZq¨}Hb5 1M-ZMp9 Xf>o9EzƼΟpJ7a"EIvt5$((q˱ Ԇ!d~7|p4LZ6zl(JRI#DrY(c10K`U C(/C$n ՂХ5c_Zm wch) } g&5keZ8O%6B^!W.ӐkW?o~onx}\ εe՛_o7x7=ٻƍ$w7-,v~ c%H<*acz{TA0ɐ5cUw^ծĒdEbU}QE!m  6`A9NMFo&BO6 nߢ5f߾ 9 "nZAFN$$'!$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$HN$ 곗Ag&Z0pH]EUb\Sd_UTe% * TGUd/T~ЋTemYU1X4㎁9^1DY;[~ڃ} S:-b<]yzT)wB|mmx1˼~~vx8'z"|#Hr}"'%-_u2%-W^1*iI%u0/~r_:<޽)7 sQگmxoEG=8?;O?h6ůF;HH3V+$SE?v,~W3~@.릊\v֤yq(fхg4ulUHTX=?\j'|YV櫰ܿNܿpV@^Mf{f:K\^r>kg+W[z|u}f) Y` ,õ'b>ylztP$3jP,'QȌb9Ff*h;1K2(3j gu7d{v@]դPhnVw -.ʪs]+ e^rA,dzJ>4mw˺W،Ɣڣ䃰kBe=OXVٳITFpԬ^UZNc.J;:=Bp]L,&bFrcFXsMyd/9z!b~^_0{z<<+QtLWwab(B'5i,7X%G-;]hu8;~d./9W:" xUCa(^묪)|HR r#Jէ> gQ_ J{.Y+w" OeoqmZ>9dvǃ]וʝbЅ-=6p7eV.w/. 
4]UIe Q+X12^T^qP벲LɲF[%b؅?$s-C Bg}H8wVrh)9Pc9e:"`tDlp:LGD e:awBgCWdCWDW_ ] ^2$s+q`懥КՑlҊ~ѕ؂ծ]S}FtbI ]!\%s+D} QOtut˜ [ ]!\s+@;]!JΉ'W;+l6tpυ}+D$nr&BJBWt(&:B){KWvBn”Eau+qIژp8]$fBZ39 ?`2/6/sx~ԝK{{ko׻ ֪n /rUjNT8oF dҼA a027o@xCSmRP=yy5:-rFHY /tT2oQZg7h'A2YDkDDhS<#BVfCW׉\ zw#+DŽ6#r͎µxWپDWHWisJD-#\MF+@)ȻJn1+l܁j;C{`p+ot%+ItksP6<BVBW[5ՋЕBk]!`/+kʅ`}+D)%ҕZ:]`}H{B]!])nB3+u6tp ӹekxJs-ψa&BBWVl]]YZh+/|Ź眝bag//٢ݎV,ϧ˴,ch*R8])U}.jޝ0_aqXa)ZL`C*6 EU&חv]fj=qUd뤜.y /*]ZTA&O/ *^ޣNj z5BI QxN^REQ5(Eb5dkwje2RʪP7/c]}Y {"HLYW5*e=@k{$(yc) g4p-υ}'ytn9͓`f/!JE9HWg}Ft}>UBW/!J!Rv:e9&NWևNq*ܕڂծ] (a|u5pS'͆}|;O_8vvt>oe=DJw{>,|;@;>~^nmw۽P[UAl3QЯ/iOJՃ|kt.}iw&eDWXq ]!\r+DX QjNtut%2'Fl Zr+@;]!ʾɶ]])ōgΆ=]!JLjRˆ |d]!ZNW JOtute}[R s*1Wyv;O7m୨i&$߾~"އ9FY!)?;uFte?ޟ&BDWGHWNs%}Ftm>e3+h5 DtTeDWؘl l Ѻ v>eq~zy%ퟰ JtּtvzS6#^l l >t(-':B+%2+lˆ<NB]!]I\fѕ42LWtp}6th}+@ޭxJYNV,j\ >t(-yWHW2sQkB DWIW޵VYzV^\+~NjoY[lv;iNzJ1QI6:memG[#ncLϦy/*CdaV+Zތ6|">vp%CIS:09͓#`ϲ' ?i QRQҕٜ,5pYCz3Еw>BBW`ۡ䴬wtm+MrُJCp4^8.'S4K8>#/'gpzg~DsY1tE4*IS$m*reTbp}Lܻ֜}> ͬWo߾nO֋f78y O6U-֥ mфBEV\L kVU7>gq|^,oWi׫04:v.y ԫzƾ~uM9+n DLs}ZwG On(l3Zƃk vw3Fe;Q?GLQ.䏟~j+%ysf<+wPu x*O[u?w3? ~?;c"*FUb@`2$Kˤ:$]Y錬MZ8 Ke funOsTl0(N/J93U8Z^ād|PUJWIIho0Uѳip(_9{ʖme'K;A,M\ks-Nof4G5{F}c1@lIlx>=YD2 |\ VyENZCnG̷+}xju뷋?6@*iBqQŜfI(93fe`8p*SYKj`m$ʑtâQV-oL)7 -}30' ̉\=)d`F9W9v9秣@|J\r@1?0z 0 |. ;FHkei͠ϚQuCBD{PC(:qf1-/ړȉtT G-8D!ʼnWX Gd+)olCz6HFjRz  6 j#nPsS!AT>b_@8(X>6a2|}yۙSb IKap0]Y'D#{s `gljJ0̀[b4{t #j]w!/, bQ x #+B-|ފے[a.N{hK擛gbs |۬{L;>U@yM>jbiJĨsF|2/$ϜwJTgKSٹEv< Ð؂ Yz{6·l6I!r_`o3C@ksHrsPL7A0ɵ r B($<3@cĜ2WL#,8ZI_e)%c*O,lL񅖷!Yf#(,}岩/yw]hs9xqot<6i8'SB[_RTw }>8 OXz&]Oհ͂5,=K5C7FzGz_+y)t}|]ynkY{Tg_k67$sjƌV=z"H%90)Hջ3ׂw9KHcK%=,Ow&((1:mgA;3\ljW-?ӳ)Jb +%BpR-N(sk 9En'78_ 붖N.Fu!cT[럷٬}aaž>^">SNe>F<1r$ $H,("P-kh7aO1`/=r[FepL*"MDcں-2?X*,Yy)t+SDcIY,-,xGI4 GV FNvkpqm\4C@nQMCuKVIҾ"xW TrIy:pa+"Fd,mH? aKk& CJHPNFjŚXᫀ^4xzљ߫gy6 U04Z!pNn ZhD=1<*0ո&&0fFK)5AI()&wVKǥGS9^_?i>%m͓U +χ?}K4COp*m+{s]tް~2VGiQv|{.1\;)/tݹN ^^ha ֧3G`6'u ֏~6TP_XCon {AoܟyT:!,0;cj,uhI'SAeD'Ue.{?zיo|)S޾.LSٹ}L8cyK[jhAi0cI,\HXi dC *Mɚ(ā.0*bF;q5hcAg75q5(qg| G2Vg/>4!-adU#UEY;x# E.zzO lFЏ8߾ތZEKmv3*aeQ,]Yașle?ؤUnQ~][yA ?Ђ6B»EZ߻li<<= Ƥ_]YYyz8unmVJVyƃT-h {mVF*zEP{q'4Zwx7f׽`Bjл<*8n0(XKT yc! 
ܝwF 9e;bTJjۺq`w ;w>-]H",Q"SsL&7zda _݅>ܿߞyc7G-2p%3ˆ;,I%*ON {44:_dhl@dBu_0ݿ͈{HXm߲=QMk$Dk-ey}O Ș9*gJ\ϝ1fAq^Ršݻ^ !/9%9T[r 4Qps|pPFǙƚ@c1꛷Xoа' 븾h9ŋN f\MflsglW>`!G0 G0VD_d&X}RN;0&KߴɎ)Lsi'6imd6f~24h4FY`fXD͍*ϓzڜq/{ aDž!Oz_Fx֠KZ5 =IbzE; 2kQ}(&*LΛrm(l6Z[1*A*vYT"ᦔg3NЈh`9gQ$vhh}LE凭){X0#ڬ°H}Rk˸0`9mWE'8B\ؙ3*ͅ!{mUB!HnY488`UUmN~~Q}naCDAEkyS7K|aJorq!Dhwq>pHJ3 *fߊ:ޮZRf˓Ÿ]xډt71Kuo\7j7d-཰^!7 E 6ѩLL6ٕEu<*@}s}鷋b|HRC_4|ƣ9WKZbj)ȋ"Euӭo?ǓNSl_,iu::t1XS_pz:ww wfmyv`ZOĴZS:r*?TS> F&At* x\@_Uc,GOޥZ|gk/C|}5*"sD#3[eo;ѤjՋwSیI:tMc͍@k:\,붌tC&o +֏cm9bқD&ƂM4f3bȥyܜHW+u-~Cc*z|N vǶY'iWҌ%Yоk-RDr^ DR,KB~_o|B;bSxg|-*P۵XOMۻvuZ%nPqSה*x^pϏ^񥍗tkӪ[FiZ¹Xלzmo R=ȖMwvf뵵j~Rk]ػ.|}r~M9<bcݯ%nnXѥswÕQQ55_zw-ܧiAAS,СWsEC6R{q9.=$t*2ozBMO$L ;)@#)@ Aw* c 3&CcL7Q ",|ZS;JϞ<ϣIbQK%lqYѲ9W,:sZ\(aq~ &Y > ͐Ek%c]4&x]qa{>r@ ^`ShZ-.r\$Θ29;Lq%͞ZcWfQ =0;p1Fr"˕N9ƒa r2JV[n[Vx V1I21C&)F aML'nQqLE+R*RP9 ꗝ 0͏]WFD>  wo LS1uB$92huIugۼF$ܢY@"II\ֆh$gB)Nї[D4&-V͜x>ѭeRrfɮ(*pqŻfYYiYGu0&)rH"$ǔKwELpq<6;vCYnxX}2ǻ'~TZMM_nxgFbbD'5i]oi>wҋ|LOӷ"Aޣ~| p 1ȒRbjo[  ǔrJxʧx䃉p5sI|rzٍ{LȌNUhVW=M7 $M #Č)KV%e]H dYf0/CRz4u[>Okza[W AC:ҁt77WBd`JBcAF&/ o4nTD?Dz%+SS6f#Y jlp LI+t"(3QPld0u0HWY,r\*K[!0)KWN T3g;"LN>5/wqc,]/N>{'3gxe Bv[\de\Y6 MXFPZș3 WNGuλ`o߂R8MlzʖfJ[uokYr٩ƄY*u7+EP ST08%Yf-H Yk1 p \;f86 IkmGXt{(=ߡO]@uddznS NV:Lm:C׻׻^h&AI8z5* ӎhs|jlP ^ r1*,}?j;J̚mx-;>t}x4_b> `:?Kl>ԱM͇9H)B#%1lx~x6Ν~{=@z=@?z=`KVCpKSRrWC1x$)>ɞl!6Bd 2,pw!J/U V{SR"evo 8?;_X=|w9|,]]M:cRS?_TUuwS՛暵.IWZg^ l=bpRRU+J-gX"wjPXLXWW* .}(pUup%s+ekZ`W`ߌn^Sis0wǣ+ov# ac%AgۯM'HWxjd>yaVqʋi;Phh1Lu@P\b_2y18mj޽LWgMr:~p{ 5o~[eḌh 6Rds 9ߍfϐjX7'ܠ~ՑIm FZ6JFZ#x<P̕7+x3 `qHpU v`e WJ\=Ce< *[v0pUuWdcb%\=Cr 3WyD_/A23b%g W qL=+UFcs?iI[JW]X f/o.z>q9*&jn\a4]oEb^'=d\TeNiLG<^8ݦUcZ:῏!3BvztC:{:bcPq4B+ \̚!%K,O`y bb?fBM*Yx4ldήqRcqea i-zGށmy-GdM9("$¤VJLk}lhkS6i##T@$ZUhZ1J)G@(~XуN> : 2t{A`)hNX͌VKm6Z M R͜u[e?{۸ 1OE8Wg f`W[,)<?%K%r˖=L&"YU]_w!$PRA$tVܠ%8Tw8` }۝4g 9sL)S3>s ɝJ"Sg&܏\yV>Qf<CuJ;z*VZh}Jk!%zVպF%jkP[zH9XMc73W\J+L`I0HN42b,KPBrCD2R,Rg-hM'mZE1M-M/*D@iW#W~ ֐ q`xĶuyjbG/f.3ߺ8v`3љ':=ٮk rI (EI ֒Ěja$˗ZRw(VN&)snAu2 F14zpKЮT g¡#ڲLqxL&j9kDrHu mm"X;/&e6q~3>D|a[fs#7jPlRǴŤx==bޢLj5\$@s#CcTT*PYsT0n)*P4Y#TcHFxIYҞm|˄`Q'#gjg9Ӵ 8lkY=S4r!KN5]J<=ճZ y\8Vȵ'v™:1Z}NE؇}vd].hJe9ЈJ=uA pn L!L4 Rッ+OAݖ]9K}l:F˻XyITS \Bl_W݇^_}ٷup:^ Tȴw&{XOׯIO3u6א+XWv䂥li_۲c+|g=:zx®u[vH_"194"9P%xfVDø2,/A {}WAUΪUw^-WvWpElet%cRڙTOdRP|~MG(I@%9UۨcwQ{ةc&ACXN˾uяgAp)p ">op!u phnTIZD4 G6{'ȏ| 4s1D&܀12BH2%ṢTyiSEE%pihX:)S 4@| R1̫Ĵ M:-=7GT-1vsEb.Nܮ5v&a'htkYwӴk(³u{ȊGF!*sLiaWA9kD=Bd]ˁū")xITZiyIHV0|($Ǔ *F4`:(pZʴ<:tCqyb"4*\yhi\ ?,E€+Q: Ȼ$rR=9ouմX#w?JFԽȮJ< p;x ip5nͥ%$J@SX6Q^(@Տ@S{jDwi X;ݱ)>"d럾y4> ۪~8M/-k2?:~ϧUN{o|/lq9KBȝiz|!yh'G}\"'es1QQ_c\܅k磥Cs4ݟm,r _ߢVX<ՇىOW &O|S{_W7^ջp:^|u?g!\?~?9:fwO5 ̏_jۮ^ ]\Q>sfu,רȧhb箈ŔfTQ,}xzde o^v|2=͍,sp¦~!i(Uk5at8帻u30f 9=QGN+; DN`[vSWd??̷?VȺaxyw䣪'v0x8؄K6dz:zR S#\ QtV>.=2?xc/]A٘=>|͏&Z3Gu>?@4+ X6__ 0Ii.VonDbތ.6ow jmNN~G"?t@4<g]T%jvL} ̾bb&hl완/ oj[ͳd˜csؙ#3I~Ы]o0wpL3a2c=io/7hް޼Svgi!Qb?u *:B)8ƹpL`2lnBO**EilO,nBλ_Ųݎyҧ ̹q'"H(aP̥")S6Q6$+sc ,2t\O3z[ ^7V67WndI4l-lz6T@O&_R*u_}/,1o֢XzpمQGj *j6WoȿyWCgfCwy3|F.g~èdShFb\#(S& e.ݨ9kr$:(MM*/Ȇ¿"jAm m]]^?vxkz{*膹wc\G١oCM~ԞyT`gYQ9&3{7{c5۩8DUBWE+vu[uX] RGg5}Fm17;~/J oy͓#)jOqN.HWgˬ3L8n";R7VZOҤW1HLq^Y\9C9!6 F% )P5`pR9& b1z>-K-N/'/c#-@\NFY41 F g!Yn&PlWTP#'0b% E…%Ao<>rB ݚSDpJ j1r j<07[kǗ.eSi=3@J*XRr\x6hrb],-J!\JuH@Bd h?锸UhGQO n*Қ95c9[.,;B euY.G"dkz-5^&e&~O/6ߛL_Ʀ4Zb(<$ux;@ROV j\+4 >fg#t6A*/<(fZQ6qD҅5v1rkl7%y(Zw쪵iaM;Y7B>9ͨ]?= a3u4:xD0DD-5=/$T;-(G!刢=D&D%L{#(<`]H":*k blׇ ex(Ya:ičXDSbKQ:d6k G6hq[BPg\ṇ I3*}nIYX##gF8<:Kqɮz Eb7q$ {b픊 cY u1 ^Ep)N/C/>,C^XTؖY^jFn |ng= hbk?Q6 iǹhʐY>41Ms]j5ZF THaD~CB!eڍք !JKD=Q u6I\Gf$ZT#odRj?{G]v}xezzI"$sHd 3âҌ;Rz(Ъ (OxwBc5`^ϸ&m1Fϭu(gpsTjƅ4*#6H/: 
Ap2=GSMM(d+t"Y`N"b,50\ 5V8kTfxVmAЄjDRs!F ATqyHHp{=rDh!oV.W#OI:Sc\\ ӪW8zձZ}\|Dշ ׹8 /oOpp_G|x*ƒ8h9~ɧi[DQ&mVyw]g`GMtEKq۞[q=Y ֦ҒXj*yvj_/;ΰp+KTSFa}7Cґ Çv*̋խD}8Ҕ bR0UD1"T6*%\/|] A6Ԕ 7WU_~>t 7ͧMw_i')t?sM(δLrO( x$dRkuxMlA o+y8N?[yowt|;=eߍu_$t̾O }sV'Jg ˪jp|z)O/Eow R2ր o|:r2o:.j)zx6U/5Miɲ^~V>;_IRdUҒ$謹p4- דڢ_38]]zEMpR}_vܤ@)LK5^_a{JN Kjv3@fVnTdiܹL>W1kzq\g۬'{gq(Wiob.<=0[穲)T*MZf'i@mϞzcJ{bǎѐND.g,J #?Hg1N 9 /$'SlOuO;y{H+@6輻QOB61T)S 4a#A29HysQ b+}#PrΘ:7;Y&]_x*BA4_^CAbihdDܜWu}7z_z Ld_~/4"+`A[`z_t`IٷUp?Vgq~nѹ/NƇӚ9K,W x|9by=F0Mʝ{ 8`20Cs>aUγ -{C P )Y]>33 5͊u%LLG z7ex 6J @IrʍG S=bU q8;@+w2fZ9$zf6|u{T3!׊6VZIn݉ۿ(P^7;$CCtY^zhNܽO5NPѺk57 o}ayPY7%>XPw蚭GQ5Qӡ9cfe6]//'{EIW䕶gZi+J[|!~7Δ_Gߤ%7ϑFIkK?#R\gɍ_r;GCaF]cUJ91$\TRi"E3JZC1Ù7GCtbreqc*h%{gqoR0#4'(xAcmsw\cw)9']ύxOF@zC?0+ Sۀz=L]06`mB3KU.EY$yk ̱:C:65mf]{~-A[r3-j!{# jy9նJ-rFBB,ZZJL)0"~ ܹͩU[>R5{(V*a:hӖbi$QK]Ga)FP.y/ y vڃ9$#b΂wDðqEĹXMw+؃ك{+"{eL^5IO[ϟz?wH|O2; 6 1B"%8J$ X%;Rk)i:/+۠AwijS9 ˇ|;l Z ]8fy`У7S(aiMIgm8HnӢ$JђdKk,)Qp2-GjZbݺf{5240a)(u(4A96zH (`(7j<[֑aY D#aREbD hl&RWVim!̋ 95>{Sc탠Ӂ`>}0Ar}'1O'I?|pmO?|Sm"o~+8BS$pz+1!.o}_~7vR/ﶓ?rN(R\#ЋJ2mV_)lp`+=qx+Po~n@\ BPl0ն[MEܵlM nim0`LuPV_>͝߻NoIGMmxf4^8JI++o~\w<7ɍّFcM15]LDzt䰾7OO=nʺ"؂gƢ aYb%b4BNDԲgiv ^Pxމ3g"g4pa'WbE=&xyX$Y i>{xjc9+<uU=Q5ڋy6]p4q{0>ꅀ rU{z ns .VR뵐ip(2~7L*CzrT2<_=*3 o66S~N)\ t..fn.YAM\{ʕwo;kH)EԒ)E 0÷d=w0w2)np0%sH[ ΉuX6L⹗W3Nzv[>3cs|19f>̿PsjBA95 !f{` r¸0.' r¸0.' 2^mJK MkbzA\K Mҝ?4dk [9eo[9k_o[fM-s2-s~˜2-_C2j-s~˜2-s~˜2-s~YeƝhČuoyW^.8 #T*BҖ:2U %8ex T>d " `rv[2S,qR ͕{ !J՜v#aR/1,)V)ʈhA #(H8 @82~lM Wָo?a#a޿t.!nWl 6["_B ;w4Y]f>%Wyza΀N*zDl@ ƧX;vN1VYE pM <m6*GT &i" j@CĹ[@({Ϣ9[ST"{Y H;g.ܞnC8wm#_!if09m XQIuhٲC,uTcƽ* d RJA2%Y lL\aYޞ'^bn ` `7}l|Ж6-fZ=&A^G3{L;>U@yM +\0uj|>^I8;8ߕKڟ.M fW΅y$?YkU_n:XEjըtZŨKp(Htqj:[2RLjʒi*jL3O=i[ĥ ]VR%h2pd(XItG5kkӮpJ,[@.`Hq8HuyD8P#RBpr:ibVV)4FƐ:>z!!%+Nz, ȹh]3` %Y J$XpQ,f`UDNX!,].02T|cdYJ&~Q!Q&= T`ˢY_BxrOoyo`fAK-8:Q;~h۶x&չ !涅g#ɫJ,si Wr~2ɑN^bR%^ڠL gI1R S,W?tY~&0\{9j(43R@,ԃ`e9YAM ;ĬW逰od~sVz#C*K*d#+Qp k+Jf6;'wfaIqiɝƱCGK s͛11x-# dRKJ 3IIr_Iλkx) hq|K#0Ã}m~  D5;^WVDgĚ3RȪP;Ԓ]S[MiRI,ʿˠ\Z L~[*IJ,MPtQPfc:};|Cրk&?5'YAPS[a[BDu:Z8")sk 9Eז,&y@FT߳uUlYn/\qƼTj}Lؓk*i*$D8s1OE1tx"1 ("P-4^4Bmb)G.x˨ @טX(CQ@8ZcpL[%BRf ywzX MKzf_*@t'iպFlm["8[7xOV(1$cD*a^Z6YRBr26R(r W:œG ǓSߩuZlٛ!]D zv700[ /PIAקe5tgaո&mE 0fFK)5AI()&wVKǥGS9yoR;ݰBO#yM?߅߃i[O%[Lw\u_0F:k8S^U7eڔ42ZCu9\Ղ١QV_1Xtp jҡힴ?_^RmZ _߀V?ħAiIo?]]oz+ގٟhR^Xч>㙪@ '/z?;~ż^ /K9&UssD|v>%FG#v*g ǓcM5]웿\co&&70B7ypSS"MsuF,SE Rc~2R]s| ghyvx؞(ңqa&i>?]nJl't[FX7x.y7kUp\/ܩ B8N $46a,RK/߯^.1΂6Z!}d b9gC4Aufʸ }~J+\knЖ|cr|㫦wVc,NKtn_]'ļoiSn77IvcT?_b< x~l):b?ɩ.Az1 ̾b*{&uP:jIaؠ1=ԓj$7蕶75Q}wLc̈́1نaUS$o.hPhoXOqkY 7$"rW]G{6nxnUcj czB 4XYuRNL IeyνU ݅["/o)]@v_P%wooWjtqǧ)HBH # $<:eSdiu[dV3Р&YlQ3h}0?La%oBmjZnѫa?r\N͡K-c1&NL LqK0:K˭.#2΁}72mgS"s,PSS+qFa{tCI#-=uRoI*Pk C, bE[=ޛ  RoLwZ D佖豉hj iȼR mJm챌FnÎF8Ϝ C-V9M-EBֆۖ?#T|p ptx?Qi<˪οV Tʿ~ ׿LƩLJ/8s3dl|]`ZsQ:xɼ*\!0?(Vw!|e{Q9Fѿf ȝ80) ੪0ZLI5[7C5@KHٯNͤIڟW5 z 5Ƚ$5۾[ȼʒp<d<f&Pn:Yq\?lF)u>]qn18OLƗ_I~v7hJސ8r v̹%7l>WY7zI7ҝ=%bM}Ā bضuHftƾ#7km^x.O^y18GK#x82>%-߀A!fl'1qc)L5g?}զ*;oxe r~%=@hRm9E)&߀Oӯ D c.)3dѢR_: Lm A.u(;z+N1mኳ(1a)ӎx H)j DJ,&x!ItHtරT!u>E΋gu0 ƩiIpS8B8Zq0<;Ek m j`HjXD`I8ʃ8 $P[1,,VJ\fAFnA 'E9{e_;B?l|e >Mjզ#0ώjaq` 8x-㎰H26S(֢T#b'8>jK<'#3p cL%[3f#gḟUlqG]HQ^]x :]xxd e&jo^{klApDd?R3KH嬉T`Ҵҩ%y|PEg 'H >r8AYf#g>YE#f㏻jDY#N#vq{)s)5$x/ ?HRii5Ƃf4qb}o`\-nm]wE/r%{i~ϙdEFa4`GGsHfm@D#y&TL:i0t g4Fx2Ц[&|>$4u\#:5׹-/ٔb\={.^de}z YW$ŕ<[ `ǔ) ;Ő EbWx5ؔr<"Uaя;~l=q4_9lNxhcvhCi:%0/Bܮz|̐@flBmBBk3O(I ZCI{.^Be,iL JF+O{-9;e8re s4XqRTYa޶B% tMѱni"=[:o)X$Z23xGwgqBh$O,׾_Mgkm^YfIeky֢JTcƺA3QU_ia*RRm'TP%Md5wg5Ӑtʆ(ޥ /X )JgP Ⅿ\Ad|06܌CF9;ÌSZ*:!x΄s9B!pd)2M@%ZR뇕߳clZC+@}%`J^鸞T;{_h 
b`-f|{㸭Jc^dEr}d,9!QLmlwO%˶dS(K JGź;gdžR%(pاF=eFZB\bW>_+S w"%wC{ԯIEcZ^&aX$Sz?iJ+gyhkr"~9kxNy*%Qb#Nm_s 7}GzȈPCaRRR #AJŌ1)Mup%(e TpP~37J|O͕ Jz #abju,tկ9F%gVY[5(P_b@댺sW\E7Zg쇏;?.{z;[L2+g'O}~m!&% ĉezg Y]~y7| ŝ~t5L-nqG_Rg/0^'zsqEud#$+*w+gﵱI/bJ\I)d®F|~S|JxG;˾n$UkTaJd?gR8=_]w[wve^)|OsFJ$h 1&>g*!JY!͒gZkI!Q~Оԝ?zcnӇ0f^9cac.3۬k)h짋`=Y>6IF!^??^T$a?ukp,D&,`KskkۨU c82!J2jV'^*:<=3h bTKSy>;L3FO"Yn7>ٻg_A:^J,燋壃U=ٴf brpGo]³m kg*@ZYDw}`\G6fU0 n ZW,]1b;5C%zX̝)K"o - d+s }>cdDID/RiI39!J,ۋjmۛ腜zŚ))O݀Z0](tVP#LBp(i1 yլ7>?vDT$˸k+(٣>ߓtѹ"1:DXkTC-pd& E­4S%),:'HjQe"Y2mBuԴn>9*1s8**4d0|_pr(x*IO^ډMXeZ rf4>R0< ޑ} [mqв,֝5v;$nMz,ݯ;8ն8}WqX-K%A%`P%RhM! Uf6+}z<]RG?Cvo<&EI_'cۇI~`1z$bLqM?D#bweY_5[D(~615%ϰ&ƚZ/=V-e:}ֵ7e6Ҿ̞JbZHVc~kvlZGB FOvjj^1%;Nӑa(P{6}93|Q2f~9M/zW3:]t ]L:#+bEw֥.]:0֌w]|xέC$ڳa~yyPd:WxHW﾿[ D/ z=*2vllc,R]Ɩ?y0}J>7nnIAյpBn]!q A(m^diᡍ̜F !*NGoNzq]:ZSb݁z9'&u]~cOR*z/.oMP#YnIIm ) 'h[nĽ~Ī<AV|>5umHx ]q¹|-}l'1J*2Ǻ@w1ۏxkBtyA]Ҙzky@:Kp5`K"1=w\ zH /"@iw1aۑ!%=C"ihk i%QKAMN(dʒLes\òBkQ{ Ӡ@ݾ \HrLPBˠ)iJF˕-dX+Ϯ>쪬9B O͘,%NHeJ3 Dy&`ή*/2Ar>ɫA-: x^=-c<;r.6-YV^7+ٓBI,i?)Ro4[V+Kx") DVSc;&dc=H* $I) wFuI-1+IkDGYx,3ue%e."Ak%H/YKizNVE5Xhr]A‹R, d'>m$1CniLρk1Cw%қ';$\ĶkOÌLPļ7bb`ӓXG՘;k :w ,zN$TtlDy# i0|i8T "-i: 'J5&gs+wcYhGd֬ͭA( `9h,j IJ*bJl 2m3=5Uhaci+"Iu)7n>mbVb;Qg,HNxĝi߇9|;I9ڎIMF\#"s'i,Imjeδ9=h*Ma+'H'G{ t_9牐 ?^8/wX5: COCҍ8HЕ{uo`ƪ7oI`K?0d×M{l0W|P!6! -OyfRxMU\1N9 9?RPB:e6Ap"_&k!U/5~1R@j)'׿J_R-(@G!)>+ŵYM%LJOEx.+Njw7#oG(]{_rsx&8xc َ Q!ѷZ R0eMd1 SaH):%S\Z*tI8I`)JĈXх;unYCKU -,%Y *$TQj3E ts.7\?N4u,Z+P$5x5;jyDѭZQkfIYبjS֐3D QH*LFY)&2 M$#3g0"$9%DS,,5@;ι/3/8 D]pLgeBeL|Z~R{6MR8u̧qbprjspPZ_-ylJXU [Vbm$+z4]iG ?-#Z~5Fz*pض)]$7G/n>/W~o\ -}i"S\VxvgX:iv iO hwֳP얅ٹY8igΞ(ܖq5s >J80wm5qR쑩AHz{iyFXHZco!IX7H'J]?}$[3C[=Ac[c>l#|Ѐ$ hn T|S< -EK"= iPw̳]<|E\@gKw-N_dlmK-tSZvے?؁HÄ $KGq~GGyU"i6>M`ND|J߽Vag"YFxp9v\;7WI삓V*b,AV59Q }j朽xC;sC _ j#ޔߤVK9pߣU6Z/ {Dkq#tx8pj/oY;3<6* "No;9]K;N7=_FgWsW>tpvzӜpAF3F ^EH5J:'{lギ&?v9W=^#8gSC$ha|c< XI7SfIBa )]:ǑBH~r=;N̞I7ޠe%W@2hdt/p0;,뷚Mc8+-K&~¾u7EKS܅lB; S%^d=y EAџut;1eX}^c̀bC1y@7kD+G:ͩl,1EM9%H&SIμט*A#46!YВ6;Kwy.Csq;gX\6 ]ͻ;"v7Ru[tE/:POly r㱊|Wr.up@bIԷ}@OvQxFt\i/.cv׽ݕ)Qna$mڔˋAvOowPBRJ x1XlGZ) 0y-ĆvV/i=/+4`zl/+A(ez_-TJ";_U*N"Jxl' ]k8'Myqv^֬r *2#Eg)-]L\nLޅ.v}%Q ~pؓڳPAг^F2.LWߤk5> {2|H؆z*$|4=B[! tTH\/% `e aÑBr. 
VMˆtyBrca㲴5FOd:ۣdӸ$neze/v_,X*}NGuz2b˙&tNIi%m?hK1:*^oϦ'ڛ@/VScYܚߚwN[@8sAA\I:Zs&vuUBּq~zH26p*N+EXį5G$voět?f4!f=|>ais__vƻ6%$wH7a;;SMIB2Pr⥺jf8}_=hN›R+J2-:ZL[I;nԺ?PAr⅒9q_Suv߁]q('eL|A?A c]O:L@\>ox&wްoC# gҕHTtEɈbؗ 2j:L?OL!0{̑[.]Qmө%ՅP$MP3\'άmA|V~K}'tI\ec-B.ȥUIkY ЎAޛӹ6t^8 j% FwGިfʰқn8ڙcIo#=~?qF:NըccF$gXq jNw*әL(u1}F&O8qiFGEVu9Q]I 5rX jAm0 6`0ܑqm0 ``{>s%6 )1- t9VBa* C2eWޮykҠfg5a 7nNovP 0wB&C/p})u8Pp6"1I=,Σ)4hI)6ٶ-j|JLe 6Å{i[Νt0\xgڭ'&Fd[ 1r|t޿;?=v?Pu_yIQV!qP _صAɉsAN#stVLQ#oV Qwzl8m7᳦9j7Lěmʳ՗}Tm]Lf2iBRJ&L8uUi=eBtT#/YʶG1nF?/zK糳w*G0 NQfu,3+Z_̜gwNގhxvTۿ9ջdB;g8율AғCѳ@ikL*٘ TN5myd\d*Fv%1MJ!3&Q9>рfφn*{l4QW} ] S[[F|4%g0CsA.sEuk} mR.m+HD"*%VQT(E@O Hf@mƠJu7 ( L[ @.0GF@\ǫa+CH [č}qDxb<B2.>oƑ!|G)>54d 4F_"E^QȫF^5Z;8S^s(e\ ƒpg;e { U"sɫ^e$9^Zt0E/VݗpS^bvf&rMH <zk[/8ԋ1ʃf^:>?T/xKB+@yFGi )z=gsPQUԨbaES8, &l bcBTVlĉe!"!yFT\8YQNQEIβd/ۥy^Y}E$(s^Y2HI:|E(TȃK2X AQ@@B80a \.gUM 8 ej<8=,k`;ŧi(?P;+¦|149C@橛WB%-2I'Jp}ءvώ?b;rJYwUP8oݽ/ ]mN:yqWIZ6UM Cĝݤ*$˜J;#3eT.FnjQw_S\PF%7b]B]لy~e͚.nzpy`{.bw62{-Q᦯rɿVb*B1uOIJxAr@VEuûg[GC6A9 <0ШsxZna{r۵wOݵŃy6vw4Xj $\V-u2z"OCpe W gc4I &>ql@A6\<ɯZ1+WY1.sKxA7}i*/MKS/=ܵs0P[}tpqGht]FYcyۣgE9A\ls] VDShG^y6˒'̷{V9NCgzm7hN(؇㉈#$"7dT#=(AgQbEWĔ_ϨP,zFKfX6\knfL.)O3drjK?OA&x8CeKjwS'`Tlq `IQry4s\pSz&>MU=񽀹S89"< ]=΃H(BGӃvP܉XDg듫-K+"XhEbdBXuj@m;ˆE|7)bU(D\^h@xA3vlϒ(wdC8d~rxo]u^ G(Iu)H,O[pZF康pZ*^Wb*ZWںGR عs.Cpڽv}[Wۓ0;uZ~:YOVѵ[!Vb7G7RFnH$*sklɫ[:ZM*htMSLxr㰉q룯a+l F`噃J5}n';n|ڨm߯_#Vh&BA g.B֡#q8E2"$ G28N)rV2{γƑdIWYX)&{1ްc.Ure>ozûAx9lT/ٌ],FRgo@0bv}l˟~UH*_үI-X˚9 b4 'bT˨MUVŚZkjhMmZWcԠO[B.9a\DA؈JB.$HHUxpBȘ&ts2;\т*!nȿtJ٬{HG/j?|VKU%+U~xF~5uE(} - &cѐ6*3")R;pJ%Έ|'wYO 2gh*;>I{k: p@X"iXvΚkxdZ'+}><#oXsiS?GR4>.d1:q¥|KPj㺖*)kZ-E,T/_Ƙ?eL+RҨ)!y*ej"uJJw]tkFv*|Ԯxn|D F4db0okQ999LliEt _qu5`b@uh _C}e}h6aɿ\Abl'# ~29[lcb<,H?RI{3BM "HNkaSB p){!p's?f65;'$S]]a\f(5 wn\__~(΃]=x\b֪t\S1sr[Y[C zfי P9_ ,e‡àE1C > H5n"0f!sgHfFkh F^zt_ߝᕻzeoE ZmG0=wti ,Y'R>I~lӣ\.Zh=iQӏW>si0ҧ:Cx _7JC")۳ ֫~sy={M՛ӆ3D-{l޸7nPnS1xFj>HSvn 3:Ȱ6w൑<2)98J޹%0Zk.݁)߱bkomc8C:%,#^fybB] wA W\p*8SVdjw.a"5R&+ޯEQŗV3C_13O/8H) XA1YN^Rt:@@wn,QjntquA*hC^ӼNQDh.b2n=4rRj㒖"ʄ,Br3rPdLdC_>"Bvjxi[krO4Jl3cF-!{!"[P[I\k+3+X˚Jyk*i\Q䒉šyT.fvQc^e&'=3dYڡb{Zl>gf Ttv9]h ReAX3G+ cr;XIH@4*kkj.MmJJ*D`VtDka)MlxJXˬ& ZFdIL;\; h' iMH  !J 1P!x&vJF#Iҙ"a`eRQcz͍PBgyCs`)$MNP9I3Ҷ.0nc2 5P?dFHT+2Ӡ4}J"xJmQ_\ziqixzv,rRTN@C$E4d`p"[V ڨr{HF;"I*N7-B3If/tqwܮH>'ibYc޻(:~0e,zHg$/ޥV)cvNбfcĜ͒9w46_\*{ca_EI^|;NP¡Fiﯦ<g@2K'#&y8aV>,w' ]1^MiΤxIz ~rT};ݳqyJo…}H<]&~{6+c}Ad8N}*}k2-LmW;»Sg+S_J% ޖLVLF+QzdF%wH;MW x$\ ӊl2)%4 /b5Vd6MAb:h;YlRIb$.aA`Ncc:yiAx < [({8ÐYu)+>Vye>{YX۷6K5I?$t\c:Dsal:G.4!-N@vGf@Б7-L;m|ږbVdW@"/u-n N.aM nxAX-ΗY"5hAz+imZ" %q,Y;QWR a1<0 vBNxiE}͙!:" *fVdn w tsw!)H'!@[^ქ5H: ZI,҂_oX1j㣣%κMMaEfnd ҈$ِd*ζq4 qTg:[' ms5+KAFkj>p|$ ֚ꁪܮϝIJ85f:?vvs/twg#-nd'g.VmIW`@w6 B?l]tšZ^ϙ{ڕg`V!*Vwyzۻ>7kǯ/G 'P $ gpN j<#0 p`˒VD h>]rgZA'}k/v?әW>jKBx2PZE&_Z#ܬJ֠'Tk"oV⵶R6y3,BLb*֓*$IFVc/Ulۭ(Z0~t ްl5-V'H-|oZ:Yc+"iS%8D o+ p>Y`aʧqr\]t,ҌG?VI 3Yi%4(v9Vsl;zOIɗJHNp,AnVWVF G })Z6 ?eʽJO&|YΏ /퐤YNU)QY[@ \`Bs8mUmsV|0]\nJ<);zSCۗ߼__|_}w/__Oje4݇Ȑj:G8^o_}qo|(Zz`~5@ Am4奒r2o(6*/@y3R@o5Ua>Ru֊v P½܊6eI\IZ ϭZs ӇMqU?&^-]B?%p4}} ߊ?t)z.G*g uU20rٌ_on`]h>N};Lt1@R|r3D=ꧩi/=[̇Rb 4} 5P0f69@͆b?,B |ڣկ#_uf/ЖG/ӘrgMotqY%Wՠ3Ѧz0C_V\,sE$d@*k'eD4Wl U\SNb%\D"< CJBG,1*d2iv쑚 c%<N61|Ɣ0ĤB:ƁIbN(.G-`"V94PRʷ 1 10L̜^o" ߊAal=J" bg|(m̔%TS(څJXck-?OQ2Fl[8:o2yRByccXQěgY<g8澘K-KaD ˤ,7(g9X0#}x3ӼEj_Z[*.c\83?brϯ{4~3wS '{}"}D#;QUrU2'F~w~$ xZxM9 ̨(i(!Tϋ:=]S{Ŝs`=֔sQ'kEټƙQJ!^;TV[jsT6P!la3*B-0 \FVV+Ź2-q{/IC|8[WM̀Kyj*2YK>S<8n,ԃqe q85#~4w}21상U{'z"]X 1T52W"ȠH"bk0DvEbʙ}._t:nY=M52ɦTau%T_}w}ZԂ7aM kT4m2dB~l\.hE@ymUf-2q. 
{ q/[!:k4z:oB?& qh=QMbl.8 |+" 5%Z+}˅5˴f&YhLN*$֭*ɜ]RN4C9Q2 /|W^:|kXlUlϴgE\UfXfF+4 oU#6;N ZTdְSaLYO|A'dѾ8_q#d{ۘ3-1 aT$Q@af.`kKc@`-.L='p"Ӫon(K+Š FV㰇V5߀2E*AB9(aj.fAv, +b%LY y>CgJ:F%ô 콯V箜7OIVΦ4S,8l+ͨnT?@ѧ;$w%ŏ 8uvE .gIVs"[AɐSyjA<&8?_!&&K:H ֧cl6>hLZ;c}eZ=yr%-%UxfzOAn =@Ͱs,iK:!1%ߎ?vo*E lk݃I9hS' qmD"-iG{q,\;t/{| 9.17=3(K{KvsT& ?N9m+F%b:t*I@B;p1+vһ1IIn\N?|kD2IN%xhV ?qӎqrGB\bp0$!H f`d R)}qQNQDt8guT QIp}=.CWD*̫̆+x2zE2 )d1VJI"£(TsXE P'Y2,R ѓу-CB2,bbF唋aJ'-!|]%i0PYS:fi ]Y)W#s(ֆj&P*0-ȍ(#͜a84 qմk)k\~r%`Y2ZBhm[+㷻DЦ7.2TEEYlap9V!LE"e"JciJ؅1RqB{,[G~~so[o@6Oy)g 񂆃).GpJ&2u&WhfpWV4&Fok?ƧgOg^%f#V.kk+oymlb֙50 *%hi{shT{}}$7Za; k>RIu&Q[nq-$ סan e $BƎ: G#m195 YDC'4\hf*qFY`?N:oLN,e:CWD@tp4w$(b;Jǡ38-X$~tݎvVgں싕;l?( BS,I8TB k!$0$5"F REhD*[.;r|H%v?p v E߮VӅyz?Ut!]݉z\Σ_?|Je2/xKޞ/8LTSFÜؿupۻg0:Bg' I2Axe»@X|s{/L"ũ@Ig`&OiPݙ):'m<B~9]`aqda-"Y"N2ar"ATf4YPL"kf)s5RӔtÊ CuI9&8!!012y&לIr g^J$U8`ݤ:SGXJ|=$sAqE=VDBk`_?Ηɀfp\Tc9>`u͆%yɜT a'YS!2cs 2A {!H\3YʛY(E%?JNR /.Ts?˳O.ɤ=Ʀţ, 9/OOvs&@Zj@G&@ݽ_^~ݳN<r݆^ϐb^Ŋ#\aeV1AkWS'$+gg|얣Mم_zNO^5׃}=k\g>d{]˭Q*lH !b=_BZ1Xiʅe2"A˒/Ll3ZNR6]oǒW.#}žM n _ )1e~N!%!{EÀDi4]]juFŢv0̀BvH4M$^ a(E;uSkQ[7xP!d+=WN{ 2<%E H6.bVbB{lՖQH۱ۘ9WQ;K?Գ=ac F3R )v\%DxTki"ء*0d!(&y; :/+3cjx&ӧM,tO{lp#WC|m/*/ӽqzW{6{~򴴪i|hruutqQ1? QvicIT:PpcV$sXQ HzpD8J Y4#LJ"3KՂ9G/2XeZ1n[QͭS$TQJY""dq;fƀ;3uu'|_ns*tJnt䣍B ف-^ͦA|.kZI0-F|L~k3 TTdYLT(_tnu''-|fwV6`BXPL::>cRIOgVI|ZX0),!QD` O֟jxV vZ8O)rFc 7鈋G0|$h!s$DQ㞗 m*, ~Ò[zuI+ =4QW~Pn&7:KMx. nAV RϋUT7Tr"jzXD:nqʂVmp9Y_-;Kr>9,pMOu_ Q󅻿eϘ[4sK2yML8?Mnf?/n!Ogv|6oO%_[y߅\{KHmi<6 9\\r۫C .ocܑaw U zW!.Lapoyf~ T6۴XaM0%65u:sJ@jrfSF\K~ޅr[ T*fwᕄWd~U_#ͷ#oQa X2p)"_fy@@4xLa:>j ` zpQjCj )ūX1t{xr6&J 8@t]G#-qDpHҒ/K8OĘFtDČ$h^/bwN=~<Ξn#VNr~Ph޼ T] 6'%߾]]SX0H 3hOMJ*W!L耙huV9Y|$-G!$p4Ur2}KL+\;b~75c||Xbd:*kpB@aϳ[\>Zyzм6Zdoj) _RHcMoo !-p3%?&7D~;% )ƨ$I:4Pځ (:{[ HZQ}AH/ I}!(i&o3^R&ȳޭ-]F?e3-%&7/dQ2 ];`"b a$:޼%A>foRB nF` kBT, O> C`#M¢̧s>w4I,z;7uN*OueF~kSrm{&zM&01*֜g*L HkV?-Q>ςyy9|枮^ͨ*_ChὛ͗j g+[DT`qB.uYԧ?hb.[1S0Is*5SX@>i كv&!~6^5#d9[ekXpllipggqc"O1{ϠnEnjanq/NXB_j (eˣ:n ìv- TKc$ x%lGQ8ns7dO 2 מ;I¢֊HERSa*z*zpj%-(_ կ{l5nKf0'hd U)JϿKV ac{g:eaӹkI*9ÄW#*%>V'@[X4@[^[#7b9뮱`]GhI cſRm.v(ߙ<֭pcӽc]xi3,2Td Jd '}˪6 O;/NU(z՞LN괾a,Da\ O[Voc~%1J1FL@@n+W-1Q>ҍ*^/HWkkX:@W5ZUvXIصfbR:w`&IkCvdhwLA"‡i^lS/0p?7%[jm}t7vɀb_& eەGf+6Oiy(Z|Ff1OF[J!zk@_u{3@6'~@}ED ,˟;H?I^5$`nC28{(lDŢ˩ܫ48T͚aS,Q A'Bj\ݴ87G@!PLF`0 @"1%{f>*-.A{sTEᴾL;vӪͪO6pGqˋ6-Ud8/I\H,)}-?B1Bu7}pEy*6BY]_i6àW<#DM!D2N+?Xw9]Iؔq1ECK <ܢb$;"\&9nVPg* 0h&+cgQcka{xugT0PNB/A];B:j{6P.{j G cv%!;n{#(nW͟qq۰H#îwe o@ER Cpv۶QB@2 F Ʃ8K&:\7]@vR7Y//7Kjݟβf J#7LLDȻ8𽚞vP1rPnۖ~B8agZRq(^svKxLw1Kg& ࠱49St=& &3hBC  `F l!d\0ϭLݽKn-' Y{"$oRKt7^ue [~ʸ/8^^YŁ!!8SP1Ż[vv [Bb-nDRl#cyx- %0sL\7m "[M B=!8ɣz=H[QTI eƨ2AzB_:/jgsnp)u Zo'1`1}PZUi?/b%XD}u11np>(M>? ^\+xs7x̄z-eĠ+@Ys IA%GJ`ح:8]Nfa5ݸn9Yܺ _6on[D 9{L`O{PWA"^UM z$ (Z:m+J( DKY5}tc6/HBZXlM岧臥[m9 *\9[K8ZJaҒ0 5B2K,\3 ? 
Gj%1J6aQk)O&2Bd=mFY`yH]htjMK"O1ψVf4@Pz.VDzล(0pk jA1(k}C1cI"Na;hD5a,s ̤q,3B1V|ǔi(&S\&|r0 DsB!ibP{荰!VYza$zl( Ts!^#kdXBfEѮL\X,:~ֱQ^G{uuQWDrɹfJ200 bv(0s[ X9b#ǵ'5ZMғݘr=a;o X8$ilE@-@M=*b 7Su&#S\F6]I3r+[1i J7xnRa]Udޅ9103N?{ȍ俊?nX7 lv = v7{F7d2wbK[RkV?$;,䯊b PfYBR"xk\{$ͨUx5j4BI H©T.$2$B_0?To](|= &K Wq^lQkٺg 6̡x &]˗wNi 5{smSQ%7tp F!3C;}; ß/z3`<-?<ݻQR.NUxK;D%N6N;[@GZfeEhN9b&l$3%b\HmUƹ$>bEʴOTVA@+Mt'FX*fG(8I8" :(],|:‚2KB)?=gu℉,T(X܊QlԀÂz [6=eAugktvg{ q)$43X) 5vIWԴ-Cbxd[9T*aZY 8T{K: J4D'-ͷpQR]-triH}Gw(n 'L56$Ad9ԸXuun3KA&u$)HOϦ",œ)J6Ujصvj;޺TXp10vxsZT^X)} RiOm ZX k1V(r,&1Њg(hLܟS{wQo,<x]i[{ >@!@\)e*S{4+ƻNO-4-tV5Q>2M^_ 6] X _>p ]zt y?iqWJ>nw2uW?B9{+9;y9mT4˲|&WuɂVKgsևP2~pa0Zy 'EG p-kP8I:J֝|_=*RԀPs݃`3jH!s-Xe;}PJCG zHU"*.UTi:rw!oQK#R`ƣha/ݯĽgg@n .gO!d_9q<^Լ@ i:;qFFX 3jAp]3 fX-Q z5|W/ ^hЙ9@cmMr*ޭ^Іxr~ҎQ1;p KnyoZSCN7[qGʹ¿]dz4 5>b ?k莄{+0,oI MB;WW&aXԒAFJ9JT* ] i|$b>>4(N4>fJux۽3{ѺX>o(A,֝8EQt{`.Zw0բ;ou[h0@Q;޶`.Zw~,FrEGs@k~f_iNG8 }*$r {5*$4:nqHusHh_:nM/{_ 4Ewz&8ZZ5p$0"S 29G!A7O#@z Hc }{18;By;(/_ q緽clٰ&ɵ RbqcZB d(\/N~ >[': l'S=ə.K:&$Z"e St qD3NR . ld\s:<'gg%C>de=ZSqfCgiM!`(ɬO[KMUYf@#*3YV"K*ڴgWkr`FㆫHWCQ'`ڳ59.c׬7^PLckLm](o)jϮ4<0MiW=' 0 +J)iΡq@"^JZMQD2CUJ3(388eUh9WBPILD +A54Q! d\j'8Ǥ@3Vk3nL Y)Nֺ « q e%S%M5K[/)SU^Z@ 7QR?hS|* ]M˂;_ؑ`fA݌uC;M7Î`X>_\ܥ(/]uײ8/q}n է|IoݬgxdB%ҍbq&:rFUp26S^-BBH~:T6I 70Mj5^QP|Ox K&UY+`ڏPJ &^җ-HBX[9Nb<\JD'k$mnVGAj6) ht$ؑ4I#P4S|%hcÖ۷C$>Ɵ~3`&y^LQg B^h>E_Qɜg%S͘l@ ;Sbẘ?0s&(mgMt;5ww- \#0m!3ڱ ֲ|xR$2\C?/N|P/$O_}ތߔD7n6$ӈgdJQϵq*J%,,3\✄K2'Xє jdRd t&vS[j4笱8p20,R$<-0L VB5 cPY`o)JѪV7]8F)%ŝ|Z &B=9g;=7_RckNьТMooO!n!ϭe+mT,T u ̋N 4z7W c 9Yk8bG@tIƲٽ'evWgB㾆V@ZA7T1 B+hMi¯zQ. H+*T!!wXQao;o91ˇP& Jq4{KjA"Iil;k[GKF~JjAn39ZФI4ʅFAM,X^fQKgW6OQC}(JҦq:K8,.F@dЈYLPmn܋nCxďk>A \oXo00oS^v#ÓT8f[ps2t2@7͡pb<%0+V3ܘ.n"bEehCFsׄ& ܤԪ+@'E,ޖDqƄw X3QJI(5P)>[s8ͨBx"(^ iP،EűLlYξ7o[w&8@J. %4Pߛu ?PG)rk}?q !Z}^] HHMڍx^J IRJ: xV9$[Tϫ>yJ7L獩fl=Y5E.I4K?NLkЅ<S 6=˹/7ן^^A?DS\{Lq`űc,S$MJbLQ $N)_o+5.F4cDZ\G?y޻Lƹk`b}yma7`rې[6ٛ}t6} /ߛ5P(-v4p sIH wr-1؊>h8kn܅s wKv؛%=J>}exT۴Χ.]*9 O+3J$$\sRrA+[2y7U"&/dLlpVAH<4Z/ O֪5[4PQ/"Z+]\,Q|׽KKrNsŃYl>faz(ͧ8mQ~>qO߼n}z~&yMWoS?|M71?߻/,L~/~RϦ$O7)]X䍽t8?sdhi맧oqbx2ޕm$EowCc;AH,ց@6I4ﯚ4D4CXh5U]GUoS(_ ͇)<;yStoɚۮ)L>|-'5g5Jv1>| ,LM<[vip2Οm,4!fu31]M?]/j0}nIfy3c,g|5 '@[7Gpnn]e? a4 C6ɿo.R֬yۉ7t&Bopn\x>\zw=ٽ7S X+F†ڣ[h9:k^&7Q=}PP{IHa}y>^u/8w!egkAaкU>,yHN:񒙱?ъ"-Q80D$wY>f;(8Ѥk6kP6ob"PR)j.DX2j1G|U@Y,: aI0 JqTQhZ!JW^Ytǜs+Xbq` m@*dbPPYoIq瑅\@Jqgׄ*g)%NdɶzWt^OvViPԐx/aǏQ[I {o09d+ 95'V\JsZFQ(hQn(yt> JntvL2K]OM+o$LiDaRPF #6OtP)1ͩU2OmXj5 !!H'X3yhɏ';*OlZ uGu=o QG z*~.n.!7tfhloIx]ĮaC-+2kPFGrĭ٘pD[hAHUS72pc(zQR Imtd-c+&y@&l~ 8ܡb*L3N=J/k M/":J#yGqh$fq 7''r$B y& ="`I ĚK"XzS:>O|P7mݯL:L{͞g<)py!hև"5FAѦ8 Io] "yk 1U oM=L6PĵJt. i-:mY6y.uM )V:=[jsYNyea&B&j,0W…^/-:zK+9 M@~uU~U*?__`fIcn^p >tXbhcAÆE ϤZ~w]^fD{ QWA_(=+5ge"}foqmsM ClM8!V۬Dѡկ'|_.cQUMpקPd,񁒡XD1XEq "-Pĉ#ؘ& k@a`=rjg}vng}v'!0,R#sEbCi8 pF2iddio=_A|݇oɇ`!)2 ޜ>+|~]v~ekim4=.a gKV8PȐc c1 bidcկ 9(QT9W Q"ҞkwGRڸ(śz enc$DT$ԘE<@D0!(x";PE|{_{KGV¢- - KysŊ& EZ&Ѡe*Q"xhiQc6uT\}Aa«crV8g q AzN"&   t'b6:hKXzQ{+8J'7#!wYq|a48*"`.b ӟ(b#. H4 o6ɶ-A*t(R$R*(Lʐ1A >&!i)pBDĚS BIpiVXA؁(h2H#t5Z@ <kGDLT۵dTn8\l# vOq JW!)6D!JDjANdˉ%>Q󒸚n7'1ƟܕV+ᑎBlD$u@ R QQL-6:k!)x*T\ Vm#f&""3nb,>d'+t;0=3_zwqSLr/$Q k T[.$ $2>],I$$I}zHR*KIŠJ`IR IھXneX8=iҮ9 )"UiRQ` 5HNҎ_"=@LHJJf Q#.`1s%peYX)GIk wXA $enкq( De`,7k :XkjȍDCkgD=B!Y<m#"+ VIRYl8t)Av||挻G4ݜ.u"C:͝U\˜rW~^ROw^Z9|DЈ"Zߔ јN¦4P5!5nDp7u3SR4y\M:~)9J%Z]}Ɨȹn{쁂MQj4A$ͱ-Q֌(I!ԩ-nIR0j$MTv]. 
M@vCɾJwwkUޗwRTAoZ/jW=EJ%K$cե|cQO)Ɨ s=(yj˧Ɨ_:_9a7u.=>Hw~w`,xP;0+u,&z͕RowjXZ[p4V\&]DˎV$wpG/m31Ӯlj厌_EIwP (?uq/;]q\^;)B޾cֺsfєQ,̥X:vn/uvswB#]#>"[Wzu$GlpFo{jHxߩ"8."7u瓿"Շ<#Қ$CMiN]D: VD1!381o6WFMfO;hҫ+QWcҪ])T_<*aX=LcMc1}knspܳBG}8d(cw rKE(@R1 B b"zcJ!H0Ě&\ @q' !P`8L$ 63Nl(M>W5F@yǢ㋬%NK/ސ`F̟:~v-*HC²E0!4È u@t;GšV;ݢNZnQ|v}7;gbKQ&^vp >> &MAZ݂:<ŭzcg8^Er^~vSz\/nWI8lor..~ g>RI]p~sqًᨷ0f4  \0ܥMR0Ihr>=I%i [.VgC=Ě*b%dQYNP, ]<}uQ_vXr lBxBDf,d2u/-d A5Oۼ5z 9aAv⽗E$]V\ζyF{FPu.>ٺomU<iȡfG0@F1E3Қ^ڻE,Ĺn[kknHs~qR7:ي+T@b"2Il?=  HH"1=7#8aUApTjM,zphTj.F%QbRY팖S6\xDas$PhU1#9iύ-0FO  nFA#'4kGV1DNoֆnQ7I ? RZIa18VCܸTc4IڢL(ۻ2㊷rV,'Bpv0}}ћlj%4mrp20/,Vi4U -\21Ǜ۹a?d~qѓ85S(c//$nƂq-݌wF0 u&{ }מ>|`pL&CC:EB-UЙ9L)ÐC$C6)XJ9@EdڦfL *x{z[tQ4=3ue"YEo{FҫY;MΨ 7d@Αy"U)E" Ζ,N߲,QvLҬ͛Cc^a8stnkLIui1| k"ęLҽj'"۵l(qd/sY*O+h@1}2۠r^rGd GZR Iy}鵂Ke*w>61A}S#l``-!4Zg J c%kߙMo>4WÔ{`(!H3ݢib?O͗HQ] ݮ=i tzyѤN}4y Hsܴԫx 4;_J|No6&INm+IvyυFݹJ&F+~:cu aN23- 4E`(X zn ѬfW@1Oesup0+Jv~ȋmExHt%zO2;$BUߏxkv&oMDm _F; g/ 3[wval| /8gfB eC*q+Or%^jǏu */,!TiV{sO`/[bݫ_x}B$:&YAWE,&8RD9iE_ӷOZ8߳'h `Qq Y-F&B(`[ب1 GO*D/wܭ#G&kdb#*0q؁.ʄ  2 aI#0S(!*R+,3%N(AD`ACX#$ DqпO}f#[?Eܹ9dA}suy^*Y--n*z|eN .PB3䗪W3M?Lޢla|R_N[5dR aI_VOSn|_=&[=edEpH$hI֔q(M "߆ .ACq[1? >Nz ys`)s_ӻ^ߖn^Jpx35nBJ+$ՙm@Eb5ϯ7o}UT.`cplEIiqbxLX\[膛s\j`EURlGc)1ZRkrku@QC BEL(`&RD&\hEcRíL^Ӓ-ZJ^Z͒uBꢅVv-,UnvruBN  q #Lb.29*c@ Jhو 7鮭(.\U89wY먅\9b1ƎM$Šm"S)lqCP{P˳ց IZ 6F<*Р#0DЧ.69*c3aU 8'el$\r!]G#KB:^GSXR)Ws Pa" |Ě,JE (^^U@-"8fWRP.:˕D(2F(DAH,[SEd15"#KУK\ 3,c2?{LOx~Jܿw#PBm@ƌɬ#V_0UJԈR2)X M%B vX :omZGOiKgq3=> ,\]={8hxpuAG٫ӻ t|xoA"O?ܛi_Cx C6υa0%`Vs/]´Z# [Ͱ& ?eV°` 0քjNaUaO똡0C'Xjb12V9Ip8&Bāf6q'q C`zKTQ/3MGZ6BJk0ܥ\"nyh, B([iPE^25i128zH<^ciXpp 'g5^yf5KB(; D`?Zt<ø+J$F~I%zGLwz礁lk+x.zWoL`MfQR)S2s8BypPፀVJuȗvA+BZd@[' Եzt:zyMYsRț)+/uN "N2uaǵ?{ ly~azəpwy #Pq?>d@<+=;.{?rqpx>?oo5dF>\|,C.Íb, #ox.iON֢3$a$-\eçI ѡ:vuxg5Fpdgı\~>pԼwtke (Y*C?|V$jjj ]?!cE7%3y*L@a~'7'tւu=_@LV^e]̵y=z5bd6E_zw l:͓oM=Romo%y} er҂cקPDBHҚs몳Lbx Z,-e\cRT"J'6҇Z*&&S`Gf5F7T? o3۱I\Z7E_2 @J(1GDI#ALb UlƤOhVv|FJaV)6! +;̨QJY}u ֔UM^.rVIi#`hd#8ALL m#Y0䒴(.8h.m5Hr7wYbNAHԒ Ahs5K`-F8;Q06b0^aTu?DvW3"T;_Y𾕀 8T`}QzT"5Q㱢 ]Se榴#7t+@Bj yBnܯVyA9&>zl}a׼$RڰnUc[0köXE/\W*B$է f*ٕG4Ƽ=xEdipcOSC/7!n&܇hiOF_\#BR'PT%Qhkf ~+YWik;bb7ŢFd\#˸x/o[<\w|ںq&' B#tPo/mԮq.и$C\Zc7Ω7!A(mH&L[7n'%LD\@OYiڼ\A8܆JXX<*apczxkrX:s/n5%[*:UnQФ4 ֈCJF}:{Q \^h~FL8Fs Q'0%jńV/0 `y'V+ ?("e}Zx-)K?ꢸ8+ | Y`8MY Mϫ] ;\QeHKAbHcCXd es8cR $Cޕ`` xf򥬣)(}ηrato$q$rOyaF6ep' $>{br>SW*WّiwL+ G' )NO 2厫ګ[{&v.-ɤw=pQHWɘk.i)V|XkϕLN,*rl9`{Q />yB9[Ó9K>Ng (;(Y\]`8w,L(5j@ٚmܷQf.S6s0BFwB'ܹLZxxn=d8]Rq^uH-?@[{ ](nݚrDKV+UH8ҘEݟN|VW7 |D_{v܋<gt53)émʰm3+öBV"!ҥC֟!WÓY9N췷ō#GĮrz7֝yz jbQ^\X98XP ΚG]8š xb 2pP/䪀>uf4o mߠvWʬ<0b̋?eV`!4 -'\&gq(583ր}|P#gb3vĩ5Jq݅ຖmLo5oX=.On7m00r uF,FTύv[qFj43'o~uB;OOwcmx7 i4ja#Hu9 [f7]3 I|L9|ns7ڱLyhm-Mz{_h%HZ;R赑q [dU.U-fHbl2p~kFXV)_jaHbph*(U'\k T>kPRPi`V8Xզ^ WʗlUKȥKEvcV اkס\:qnkf%v0Ҏ'Q>Ѝ sWD dcjL1;I~.Ob.4[É>4OnE7:q"I@* V8R"Rh!F)fcCltlPt1!utE{U0MrטKZLJZLկ>:v8M}\NNt? {0ܼ?}6toG?~a\M*w83qp߿p7ʳ~ 9!=c#kt$[EKm۬jgkj_27W37%,`!X8пw7n 1ڒ%,Xgg08\EwYPik7̊?NX3g练'(\wz?*p*X \̹702_`~`>OX :Ec^/ͥ ']}\߾>3/ޟ^w_\x)1]tOR=y~S[>9űQ$0#fWMx`p"T4a/cڨs#ڇBqcd-D+n'%By LGQ{0wQ+ ޙJ4Ari4W. KC~Ґ_ӐgOƽa[d#YBjeLo4MsEe`FXACôOE[7[1hlHdvu[ Ԗ sحG$ Mw76d@L/a!珱l?{gFKӆ51ϸ6 D Ḷ'y0CD'pcjGH-1!W 8B( Jd$Ƽ:6%A "Ǘ7ƞ2QS9(@^. bDC4sJFc)18TCAMQx6R+?I=+ rKBxHCӭuܥH 1h@qs8؝j%yIE|ln3w;F0穭X޽}Gou{4($\/:rC'ʐ\s+ :sAx~%3aOKc.8&`c1gcqV0%Q |Ɯz f:GQ.論0,cNB2p,v[`uoà ƉRuj΍bANlЛ%$\&z\`wȐV@7UUbIaeU:ʅRQ-˘רF͐FVP]PBec:DJEٓ6tB Q.% 4q1qyFxA\:@Jz@n"֌h)$T1e`dI( Y?`ͱ WhelGS,亇:D),>S|DhA9R+_Q;S&t0'.J J`)! 
UhCbm@BlEA(OT7)T e.<.kn^(X 5PvTA+żE)Yp A@ٓY\ʗa`9i(MB+BV ;ܕKG|5YL1\^M~ ]#a`BCFcXCt 8hȩ z&}~9i/@࠶ݚ{?/O9[ taL<~pze1] Ah>LLB^U3xr~ֳ{5T6.pY 3Z1-~HC$_ǵ̖ G`vq򩎡=T$fe~f%#%Zn;ہ񵫭a3a ?y6·mh>>bH{"x"ċy+},"|l8V͙G h,ϷIaLbh3$HRR,f'iWɊA'>j҃y{[fu+̬QsLuRd$ )Z/US=Oy)Fvew[>uԥI( 7K©WF-q7@NH[4 i!e4oSg%(o瓏9aO!Mܾ"tW#BhĔ&@HUf< ,8Tp"]Tβ:Rg !lh7CJx&g6W?VR?1kϾa4W\R7CI>Xh Rۈa"E4T^'$17WԿ43h Pa: 9%ZD+̤V;f12Bb 5l]%I!\ L.,DPZ`+%vv. GX]v7k"&1m_rtTÔRzx>},x=DjOaPK (˷>h9= x4Ab+Db2(qȐ|dbᶇ|\MCJUr9N .އ\N`!1TY*^w;[e:u{gmـ}3"{۽.-_w"כѐ\F9ŘjME#4VB̝&4OҚ– 4 1|Ejhpc[EAE:.nY`[EA=Vn=2_)0QJ d~s0)ڐZ[1rqiLC3J3XQ-@cSi9H{XKJɱ2!8 Jb%taR/0R6$63OX3/G/^yl`XfoR-yBy*u0'8v!B`֜r[oDvhMӲYYʦ$`f-۰hն9s%e]RhhgU5k7#P\8h fldF:%%X̣A8}[A*z_iG.zїϝ:=F |wv;./1WJjuyLw|쪓AgXԁً4 "{輒: :˝ÛO[C4}}@z Ǩw)"%t 3sNP"J7Sc߸Ѥ;OvfAM-.%ո;c!xKqA E$f=H%2w8O|Xu[P>\eCGeBc,>B` {[cdFZ:cC >Hş;-u!tb;_n7DZ5}X^bmvʱ-ڝ ooQmǞRw6Ѫ4ER]0iZH ySp ~ !E8YH T"IQ[g=pv/v o֤63#?͓|KG.LA33?'Ձ|Uwk.l+6e ~:iI𼤍!@یh늍ܖNeVbl [,ȯ~?N2g WfX}4U1<|Cn^<|V _ _ޅUw3#ۯ pS^O-Rg-??th4(7^ Xhsww̑ I9g/a7~ ez ݶ|;3,>LQaVBU@[ej.2Xg)TP!$&4V3!G08'(,%]&$ee~fOn8F2E6^wuAi0q# 8M}̳}TYrΡѭ!\\)K LJXJ1Z%y~cVsn&ucڔDz4Mi çfo߸Տ^q&0[}}ޥw/8գϟwMzy|<pA"%7w׽;rٵtcE"$ξ^&?0.sqv2ᒅf.~:N, 4t% kpYjA7˂>=?zzgm7ޥIS}Ͽ>njﯞu`O:δxy\f`ⴛSu?}}d^z(a|ܼL{?_ 5l޳o?7xVNGw}=fdM7oyuӰ?dv6%{@!u>l0x6};;6u:{ 6z-hVkʩV<֝,X9F_w(8^d&6M{yBK;9$Kh163GZk9/MT1s<~]u躠ʂ{ɗZ~^Z8ӤVuٖ+yDJ8yKv!ԡ"/C5QNAD( +.V򈪀 %ײ±F*RYvJG1q'ULpC<SJ vJO b3{uic87@-'ƊWTHC8R0.ZJ"hp0•H57 広K|dJ4)b3S-$FH>OK^y(ȺDE ّ`DI9)%#9>#gX:KLI"-8XĉAtKTM)KX #-xMܯO,e sd&͎9mb7г#JR+6p!~Z {kF˭\h)÷aScIaBM^II \ :ko ~)ZJHW*Ҝk0D(r}.9E|AQ,,IvD1XA%e=vX$`Ib,B9ĨZ+I |ZYKxձTB2p|0w4DӁ&ہ7lTb@ ~c= !͸5hf0G4O4ӜqU0 otʍDХ0H/!&Vsd{0Ec*-ϘFn5ۡ{ZlRB9?qf%f<>1[^zқeK%c>w a\nI ˻W' Z`4e>#SU$J)HR'\hU6-т\2VAxЁ/=r<`|%R@1FQ)+i$&6ZCz܊%n,zMr$&#'lťaZ =~B|3n= ~e&J+,4nK.*1![;1䰶vMީ<qUX#nH䕪1 P |ިX [4S) YR'3ng+&βT@ڻi#^Lz@եb~|:LzWD^B1œθ@cj$,ne M7*&=מc8A8xX0;Oywgc(o,k=qϻ[܎"o,MSqKz-e^o໰ !i8Ɋv 2Ƀ nF(t3 W9W fs>Yd26#+d>JLƠd+>8RBsL8 bH,f,kY( 4CM0^ӎ?Dz2e$T:v cFbIYջ[tቢ GDw!aso9B休{hP<܆^AvѢ4'<,7"LIWvRA1b||`u:upV@ùR8tۯSZl]!Ȫg]D|C<议 V!( Z*;6dumaAC aǣ>z|H:i4'v!. ;dlC5וG+oȋy1Q+}z\ZxEn>;yF$+?;S8E mVǧT?N!+ xx-5@Q⏂\T>Ѩh<07%qz^=%ZTŧ%5oӋS0jQ~N1!1&:EHVYΙ~ug./YS QK]ۺuƳ@( At(5IO \ؐnJEB(2H4\( p#QkmTpUJ-Jn֟5STl8kUWs֍o8Ef;Ci)Ǝ%iYvrZeV $Ue6ӏsD63{ؒpիxUmB)^(pJ5ZP\\\M#EQKTp;4.x:`-\|Y(ja>tSQ2EGmLpGׅKH_bOIpGyzrD3]ϒ(?Vjʹib)$EU&SKԘq#|&\sIS[XL+bbZ-nRHc7;O c( qѣ@Z`WG@p?[M5Z3P_*ҧDssӇ?RvT--]EIG }+F~`ww{ tr:F3 Ɋo-کjI%lW^Hۨ~ւHRKWKwdkg_On;+d|]Dhǝw^k)` |8{⊋ 9" aMO1Vo(c8u8.p]਻"Gq:~FXΉ0j2GQEQZpHpH;ECk ϰEߎV B N M_t;z1cU-P`Y~.~{m=lIJUR:Jn%/{_]4\|lkw+ja%mYw[LTWhOI?O 3 W?^0RRL$ @+cft Q&Ifw@ Oz+;"J$W\ =?9T(E#r(zQ?$B+ bϕ6R% x&O;W55q(VCϼB X~eHg)΂ ).NߞrS|@@13-~>?~w/enT#JCr"}ۿ2ٳTc:m6T tui}I"QiVbp=y#Ỹ~t a9v{1_>fݔir7Zs{> ֿ&7IRULvXu6tYor'r> T7.>J<܃R8&'XQB0ú8ٿˈ#nbQy'/tYZ>.{Y⦼enB䬽:i lP TiQ^-xTن:/?3)]>30iKӱbڍQdk*.u;ѻ5<9wFu`B^-`\#D !.O;༮_1߇F 39,j b0ᆯZ+z;c_^ϯ[ [w0J3K[Γ3B{Ä@lܙNABSd (PG 4.%t^L W=Σ,6 ܝ¯׽vN?)N%kѦTh/ˢإDӽ3M/N5~a-t6]Ƿhe[1 t7rߢ&0[٣sc+;ڽ)0ޱ 7YߛUVM;~a~bƾ >UJG0;1/G0*A.݁kyr:\wŘ *eF0T յ.joq;cG=|̏})f ԒΩՆ>JON8E *RTp1^sE?m5pi N/j;#,(s /%TQ;b׷RU$ j00L)TX2 4GDT I$, M(EJY:VPW-p^]? W> m;^ϚbK*LVq=]b8bLSAR7E&%TK?`b!TbmA,J%2,UȈ))f6UR"0T$j14e6Ax.h\;8~*3'EQ9<7A~k=@FJJsE3~o"΃* 5H&vWT!UEH"tLYe`13s0l% @6iL i#(F];ݛ)V(כ݅V73n9סΪZdȟoEUͨ~Z-0!Le`؏֏`͘I*"pԋ~|~{s(˜O޽dCz'A? 
f?!cyD )b0WHy")G̏uh@:Arh"I~jZS9)4`7AIAvn" Ɛ;dg5gLhu& ]hJLF儅be& J1*<&1!v΅gVTGx 8Þv,j ڂjcf2ɘpmqhBƎ)GΟFJ3 W?͞>Ly忺7^񗒊10y|Kfj4K,0E׺ }="7Aߏ_޻H/ӏǝ=/I [ Iňj=f4ِ&}L"MhB~d{Q8$QY-/)Rrv2[d>Tƫ(VHt]Rku^I6 )pL.!ޑBљpbD/,jqI4>uPk#d/y/EIXҨDBeHsOĘjHv)%6&`ûvoE8ڭxS*ιƌ3pɌ KZyW (F<7!z@)e%GI3͆ȋ:][oG+^rr6S X &{,D$e'Yo H"&¡4p ;Utwt{u}?R9߮;NMjUˆ2Z -hpQ3 /JQ} xH]@X g|J !h\ LG\* PZ, HjӏdzGҜ6in ^i6 9쿋驒 'aPsчK~rl_G܎V6fz;r9cBGc 5-‚VۖYSܜw;U &FD_Q/{JyAsp5A~[E4UjgNPeN}O`ޤ?JK8@3Ry *g$]eT_)*&9,-D ɇdŜUk?q$!#ө>$ =I|!jR~U[82((iXlxƓoq5Qh1,2lB*tFq *iѶ5JP+4ӑ?phB]dZ~Y BL,yHE sQ8iYLxlO'4O΃_[[DrpWgWk f/ଳd;4ƩE9k㺚ÄFwu?| (;}Uծ>lF:m)e㼙 r-wvXzyzK/Fqq>_TAEk/c~l4d4B2^X-s:U!O6eBf.Ԭ[/ɠjQcs&^M̙'o jD#ƽZT%sC0ZYXhJgOPض\ՃZKٮn]qLGIj Oyٓ]ƍa-~'PmYJH> w7nzE ;FL.׸c=֯JCWX,dmɊrhucZj}MPa8o{kp d/lFj-E u[T D$S1Z,ʂ:x^y@ nPuƨ:$tj@wB9@WAvFLFgظ1p@y,8K}H~wvB ‘6M,8݉reXhnm2$ N cyr+Z KC^**_QS˒90KI ~=_|{;p!P[4]*'&[ޟߜRmC%Bu }9z}+ !E]ЕET>ﬢm&9v:Gw`DVpU4_#g1/FTLڗp,,O"mLj6xKjx y[qn])%=ߦo+'&fBvqBE˳7`)7W$:.k&*1;7=:q-]+IuWU;\Ȭ>}UʄIV, DPESr͉g<1z>`ڪLDɣhEQVʱ|c(5"Y!E(rKsRr5i\ɊI*,A2LT'@VFo3-* ,D'9)9c@MړC a0nCp98>,fIZUٸ4n t߉Tw'{IV~>Lˏt `sz'F&Zg ?^_\OOߝЧX0v 7_4 NNjFW+`ݟ ~qpCMq\sS+dCO׎j uF{Mkw~rȳ\7?UTMٟVf~q? Os1_?)߷ݡ?Tݡ?Tݡ?Tݡ?ԻC!ί tR K!J+җrRHBX9g>?-xx;n6ZZfSn|M/i|Q_-Ϝ,R_g{jedxr{J87Z'/K 4xqxJA 16`!\gQWǂmkɥ]%egBFђ^BE HxUl<U_ON ]}U?DLͶ`tXqA{v5-}|!l9PEyl [;v㵷0֚AwKݓ`57& 0dm^&~srW_뷾T׋4[=)zo0Ἃ~3txOftNny?wqMW6 Oq-USꛍw 9"oiEZr#sfHK llս~yvFcT7 0|H2SK{'r](Fm9( cZUԃNWqUͪd~;ҏ1G:E7G堞C")& cm7Fֲݑ8Ѷ[\KiM Gx9+$rXwZm\G LqӮd=qsҭh0 [ tz+9ņg:sQ t]$й#䶻8zZ-c~mW̵ZlZ2K]^dRYq͗ڭAb`xYoβ"mgH_\vk4Z$%O>N`{tklﯸJn<⬋7y~{V]bbKY>0TF蝭GVSi}Mm|l^p.aNV4n[!:"IjǴҞ ^G%L^EU*ńq͎ه7wb&?^-V!WZoxp4pmo J5aVۤɄ:2UrT38+%` ,AR;A`4SPG7Fxuφ4g4ȩ/u Q"Aƅ5JLP*9$-OIRIНK'ApzJ~!hUD9NBI>*Z"ZZEv:з˒{,qQ35=A )*z= JӒIcC*P9NS5gp6;izXς`B(akgHF^bQFCs pƍ$5JY]Vh2 $C0+ vѣҮ:VkFƕLYz9^OsLU/ Ǻn*n* ^F7ꄶǍO GKTbn*7ܮp\sS%a t"S ^-C1rTkɂOLYkKLɴJhQ{[+gtH4zPJ+,,/`FZ&̬e;$c0|vk]`ڠߧ,M_Z˹8tMpdPErΤQFh88lx`K$-*1,@"́i4-J2֫FnU˫nԏZ6J^bp\V\(tR)JE#6,>,y%Գauk shVܤҬ[RU(' c@@>x4m0gIhYsXTZ^< PD댭"Bx *9 :Q" eJғIڕV LdNHYIeP$;`n8ۦEմ6UƢ޴1LUf L{A4ԐjK!E 4kݻ}D1W)4W Ec* Ӣ^a sY ߵ&{ *`rbx>T#j=h6Q)uUˉe!QR>HFZ`G1j@T:`q ,:-;KcKy` 2sq˳+rʛ<N9W:3^eZaX&+冣,_fL1 SI&:n| +rp6RA{kJqe$3Z D:Nj9Z=I;$^(%Yעgٴ'hX^dО[Z6U7 fJzgy3ꂷX[Cz:3ʯiL$8 J48 ɑ#:AHj kDMk” 32t=T% ]Z6QJܛB'zaL|vrc\-۫>v_S,AQ?.#@IOݒM"ೋ˧?7lA$xH==yfW_< btl{D0w,Co/@Ogkw77 -^)~P<@x GKh[j-FӃћ.9 ?& ǒJDE%O\@f&ަA:֏,uo4T#$~qtR`8YqP<`qAr)Z_St>}|dLF( B /gXx V1yb{To7k/8W]Nbfo0Դ<3JA@aZ3"9$KT}Z} Lb*j3&,*|@,ͨCI%+KD\!L(,2{Js}樅SFf&r&kܙP&*hbJJ;X]L3 YL\ zĉ Hj0PQǵYA@^Nxr#i. c eGZO5^! CO0Ԗ*ڊe-7읎i4K.+jeYphł('mqha" )̉PCK 82J)AڑB*iGt&` 䲕T]xV=^ٰJLɒSXf*AW]8|QAjl d1]i!H%DKu8^ߙhEW@ѣMog r^E0y\?h,<Lx-N5,n-¿ ]xgfAoM~l]~[_WuUut8 ,url=vE9РaJd hSt4ҁcq>5t:Yl zUw׃ SsPMcʰ _1a(mk+I =?E3<׆D[ _ N~8V8v=*3;~g޳*1ץ῟ƣaׄsv ن1oSQlwZsZ|4K {MdVH&23Hd"zs)?iZ7-vwlP~WCៅHҲ0F׶ܕ,s}em|tݧc+^˸0F2MNP%+ QU}T ZÇexLƃ/Jp?tკO=H.!mBDCBG*$ (Vn[4al OwئPiT;퐣[I]>ݹ^N8";ӝ{d ]>1i9ਦ7-zHОh88-⍃i@88ބV'/e yT|%i%qn $bWvLHcc'd!VӘ%w$;œsള%}*WkO!W>h?O|p?sdznl՘dHH5j@^JFPGs=5DKl=ӞP,IDDZ$vI՚\oX 3W&p,SǬNpNX7¦iԂX]O9%vQP;HRωuށ THf ҂{M@8J@_iW6ejMqgBcRZZNߐ" HYTT3m4rV3-f,Q1JβRVUÂ>1qG @s0ǥ& 3(B>XI`LFX4)a)Cd@* TwL`رŵi5,瞀⌌鵜+919T ت =ZBT!_&J2N)T!h¢cc48R)M8X&HA 4X%X^80"Jv/kQ wx_3V$%.Q짤6d"Y߿{q[ƭo`j _ v'p?>oyx ~B\تbea lX֚YxGu<&VW0Z&E.r>z; {ArݗV߮N`Uk2JD*lVcC\'Χ?l&:Ε.i-FD.9QΆG$tׅvv{>*Tx_v~f3[]BlOh>\E¨@n}|iAuMqmw O{. w7Bea^rzXe?io˵>|buQmx$#%^e^2޵6rB١t86lr \J!) 
RRH{^Uu= Py=%c@^vyK9]=P@Dz9Di;9*,{l*]kT)UL.{SF,r͞6,YX]|,1›;dlp*{y@\{װx*cbNP~ %rtB9{l|uٜl<\xM h{^AoXQRn^ew [g ;E{mԹKH$dNN iv`;羀àvޠ7xoO7]]˵ήO3IWwf a9';}}[UMLom~6oqܳs1f!@[dQPsD6Ke!n;}V/աgq6yNQ_م^%f91'43;">(#$ZǤJ#HDp{TsN(;|fld<>ޙP8Ɫ&%7ZSY=2m8ZHu4 ZPk3{6x^~_Fx >@nܒK~Hw1l$IVs=QC,Uӭ߮~rzZ Y]vnYvBo_:CԮ7s;mtFNr:fAӐ쌯C?0L_](Ǻ HEze+\9%Iօ2fNCmAY0>cNZ|̆G2+85/Ίޠ7n"&G/nB<'m Qg9rљ=*4]Ie-"ǸZ)(H$T}@*o2Nqϫ%O[YiV2A[`fz6aҹWk{ȗg)i3~SL{Z˶W*w+?wUl2cܔ@=yl[6Sp0mr~׆D+>R5,qzj(?+dtWU12/`P/o 3XC!w/Y&/L^f 9 LH!R5Zn5X필 rm*C7fD57qX)"iSkoB&^Q]qeDP J'&eʖ<ِ^#B/eZឨ|R*KMhʭp 2^cKxumnHctZVVE_y"I!r(ڐ 4wZPN*"< -,X*'8qd LF@ 9:MUnuju247tF&3uUDŽ#OV 'Uk^i+ %gXNs58O2wCM7w;WqFz<=;=Du83 :_GO&[q'ϋ~!؏|F`:퍳nCx1;8F#鼯'3Ԧ @P.zZ;QPQX4s:e%|4-VWO?N'Qj|~ҪmJ#Zc#F~գ {彚>DSky4B{'A YF()lt .yXЎD ɀ%Z-z$nI&:+CV&fa<%]o{P*j{Gk{VDApi-Z`IzQHAAW J3п%͆(ң.heP$ z3@ V%ѧ7 ^TgB\`RE-*]B$'cI(44AVHCJR3o Q@K"9y}o> sʆpv.Ҝm* bEDNϊNEt4 "Y9G=Uh;)ESk81PS9Dh x Lg|^ĺ)r<9'ul̷Z,ӫ;nT;īk<}~s^Edx5}NL9?> F۲"P͖S.N:7 ~ UCYos3웻V·?JTh z5m(1(m6T#\]6$Z11@<> r5n5ðPT@?&fl/ IFQt/*cJk.}hVvR+ P*{L7pV.N:EM8l2YVZbTХ4!YSJ@HNY 43TaDa/WfA%H%Ⴓ2E`24DU ]r d"R,gFqԔ (L˕B񚛺{pWЕFŕڥ>iVG"7sA7 M2 .DZ)(:P(:uUxT3CջTgBu朑6vؕ>k19BXsi[̗(jM)AX0G摣͔.FIipHXVJ|&~"z(of)oTzGn0~Tb6'X̬`[eW9sQ}38s@jjYߪ7|"0jhf]5 "80*Ba:` @[^G:xH;&$STqTB5 b4e!B}fР=!fWF^CۧVq6@Fԍǯ EFV*#4u!Cepsa|1~5 PQy*ާHXN[pJ fDXz8nF}+g`LqlvdzT_+[ ehj*Àʎ4 L~go;K4~\ nracJGTr1_WȾm?ixpNJt 8Mȓ?VTPx%eD W*`%$OV]Q(ݐDL> $ F8goif Wڮm^kͿuHAaGy3b \ dϭ׍;zK_]%>VHADYť͇JA͝b< мEM~iRcRIQG0.L |¸}nr%h47-}B Ղ!̔'a/34\Ky#uj ytmk:[ڛmHB0~_eGPm~ll3QضDm4Hӵ'k.Tj畷5+o7%ԣNAWV9a/U^?Tɞ6[?)i s޹sG7T]=~zdOuK8-*Myv32,|}ߣkKoo~~ Vo9 }%c9;&B{klunݧ8w"!R(!1-T^)UiS ;%1kKK: sqkO8Kjan͟|%B = # 5iz˦V>=t/ > rgMW`t;__V`=(gzNwvk_cW7݇Nf *m?ulcݺ9ȿ׵3l6,=qŞ%b Vm+4BF9/!߭d5 Xz 3cd|3g8#op3+;%~2LDu-;=B?Ťx}Żz/ӭ ز]Dǿ5h2䜋:Ĺ=A"HkܼsG74i);7]ؚpwFnl02WI9;^}tqji/m1b.&o];דQt*pRgi\CCͧ Vp,TV] # Nx~ZUb1RICLfATw>UNHD2F860XF!" R&H(M#V@Xq}3_ػ؂7zoF>Lq~6g:w^=:Q &ӂ$<6IL@ (@C(Q% ]=:NȁF)[aӍB 0hlE,B(a H1kl Huĥq` 0yMI KPfFuSm3s̈hH`fc,I AxyD%B$,rb xǔ> T{cpRդɶ.J[#).3NͲ?I'1d߿o@^ږB =r9:9W;>M8+ܲTNvڃlAS{':jµ݌2O]>^.(\z5WG[Mim  2l1*[ (U)*el՘2o*tBĢQ8t4uohͨduG.ʳq~REI!#4 #P!4FLx16 {.}d{fW1XVɚk8">ĒDZˊ]eboL'^d:]f(T3}b/w.'l%X:TݛgʺX" %w:AbV~*whX>hRv.>>҇ldlŝъp?[;U!6Vrg \zkc>KL 0T(ggi]2uw⚨Ӡ3*G+x?8j#tL@|F&h"Tܞy@L 19h%A5ҞSzq=.QѸ=%s JȤJV qeFQr,˃d,UK}C;` b%ܐTmȁa(I9kSjM/JL1LL97r7.}M E:/AݴmU}Z&SV^};`Tx?IiUk6q ";RȠ]|Y|rڴKS_עˆ+y;/;ǻʌvp^ws`V7KvwT{E9n+$iUV_ ]kQnMW|L}wqk*IF؄1PLl:QF=R?ZzOOn rN~4yV_d>wy| D I3? r/ GO%zGӯ/o/n )T[Ufo>P)zBPy˗T W"SZ Rr\,=K*^a(?#1m3|5sE;L / U)4!l3p0(o %3%tIi%S&)WOݪtaY8+kbz.1\k!(9'0k+LƴUdFHqND@okJl0I1YpbJi@I( O }͎~#eJ5΄C"9b,)yşF=8qcCGU0)ĜUz[^ugP BĮUw^O7b.YMps (tH\rKh%t gMkLx!ztq>+k%JɡXস~k!EZxͿ53荑#emt'c :P+0rLR :=ma& [qgjNxzqzeSqI>L=xO(ce-JK,}de>nx}.I6ͷoPƚk0!喥a%E, >ûEGq Iz+q+:Jz,D!*fg_@v4g@ <'^oRml"T&U?!IpEQ0t8HE ?UfFBXΪ 9A:W"ͅ6ZZ4wX1:Lal%j zR]}yqVl祭BX7n/J}}(\Lu<quk_UMϟoe+O-}EQ ȭyڹ5R W>5v6M;Ƥ|r#!e*òȲkgz2sJUD}؁TЍ6fҲ~BBzP.emg'FilPc[~?Mmuk;y?£ ݍH-$7G=ΨIRWT7z7u.{׽5Mk_P'-8u/s=s>! py9~H/\;,Ɍ跒(whb4uTi`u-ٚF9ns<-EW[򝋨L[SoV%#j7_ \DǷ}owtUk7GV|"#SƟRͩ>ќhJ ]1L)zϟe:`!XTyWLʓ>O(q&P16X"PZ( Ug5 [RS0>NbYɾA)#K0cIJ>CM Fߴ4֝̀ug3` Xw X7TULC j-Bt$H! 
FHhBJ0C( Fa;7ML`eηЈ!*l2N&YNK\,y͎A6Gw^+슗ue;FtwenԖj%`RG(`GPRt㔱jѠ1I`Ɛ!1AE"Mb1 vqBH#-9ޒh/蔒|-fjtZju _l&i17P68Hh,mFT0`˗XIk08ş&I^Q|1`Xi b*1h$IL@ ں^G3ҡyrza&M/&;Z݇^(-3SwmHmw2_ʟ]Rd?͖DsӋg2W߯!%Q6(5Vr9"4F]'2\ %h{_rlϧڄB~ky:] ^\B}K'{~ٲ;l]m|KNe@]lfhfmdЏ1 ,3A(`SonI`2T?7Q0҄XldXMd0l*JF,&ƱH4Vqq_8-am5a9SA6oęظFc@NQ4:`Bf{`rsi{w|rfïR$)&'a>4KST hAgg^nHDβ^JЄNR /'EnPJer_$)d8JdI>b/Z=(,!.YfBC5AQaA4݂ToN„|Cug›c>*&zɖMq >Ca*)Q's%D.Qv1M(^nv,iq24W^(ʺgP »bWB@_U&j _]n_添+yFJU>:0X[;R,IBDJřj ]-hcߺzHFvT`VdԘ~ZGWнt/oXSx"N-ަ &82o^2LD'ի 1ņJW|ĖWbd)t|ߺq̨B{9q`GXbtBi"sW<-#ҿ_$]<&o={RtNoFf8v{$pdF}9Sjɓѿ!b$Giz#Ւb&f8%'SXJ$FCSTN9y7}xpG 54c< n$%idrٜJf8s<.i*KnUAv%Խ4l+IS/֒Ȋ) Ų65SB"\ciZXכR< h4pwmL7Uz~hS`W1nJ\ _" B6\OCX@0+))qAy)AJ$ۛ5v6ӂB=Udfk":WսK ֣c ̣gdT~.vrc1(ҙںa2˝bc.2͉$9pTLB[1+r J 4SiU)q00%Ÿ8?2Zh1BTє(3ҤˬlUq-͏zʞ_*v 1D@BCLYUF 85ʔI9 2 0ޕ!{p4 V) BWhr$%ր $:%HrKhB"Ep {25*TmR©`ҩr>ř ɱ&6E]qbK TNK.ed-ShM9Üs-B`NgDƄNWȲ#xNa͚X\ҋR.xJ`ahnaw̘ cbrVer! 0,)bM ek CT9XEE&O+* vA!Df:XH`cSšݡPt5cFn-kԟ 黽H Y?Ƿa>O[:bB,D7T011B\kPR ~[zEy Ex>@,9yW$)OZ\drɜt w=}z+S Xεbԑ>$ҬEvy19,Z*g`O)QS׺R~0E=FNCcگF $}0B<׽OAzG-914K;>fI |+ĺ[TLչ @ԂPe{"ia~JP.jP1u9PE%bP51?^+n8RH" !kn6Kd߸! c'`:Im.-ʡ*\4"r%0ƩRAƹB` /-qI$9N4fD#^A(KE.l;T'Wՙ ̔\ɾayt)f(O1'Q!!VH!J8vT,rl`R!bĚ0X|`j/wH7I_'/aKO羌^;Ξk'MȲJ6>E|p,_z`]1X'O3 Θ&,2wdy_ɘa}.C^vi^khc ~΁ =VRT ]:}7m\~T>,nӣ3+ 5 ﰗ2c<ާp}z>JG xG4ѓQγu/1i3иL5'T{{c-[E *& Q+и|KkU{gUEGkT(׍%]{oak$qA6itg#:6֖i&ZKYTY*"Gw]Nu^\ [Z\rl֍a;]W£ς-ehЭ|Hx!331 㙘;kUk aƻy~ff:| sT5'%݀&BVW~ҸCֲQ84 G ᐤyTr`޾׶rlm/ɛ55W˭֣z~:;~IVciceI0sn;wWor28wOQQܷʃ7ۗ$ԗ亶z)_* V("~@'/ڣo匤94-ЁmঃW[Jhhy9a38_J`30my҈J)>iOocbP`E@}/Vc)F{8iLƐBC4t(ː2]UXHTơKLΤ2 W(2rX3 ,=ߩ=L܆ط PͱMj+6hнėIS\eyrɕʅ]ι4.Ϩ#`nGnV@鋼Ӹ&/^8.Rx0H>bEn$pnT9mT@q GWm;RWKTr* #L~`Ťq=UJ_TqnVpq0jՔمNZТL$dc5~N"Jm[5@g"xB*\?~-$خHUf%#c&RJ\=*b΃ Ol6:~XihqvT'ED!1T;S-V/-_ŠBXSpKgP 89;j1ph9_ŠB s xXsH}UnVOߋSJ¿iGVGI# Ês!Dy!mQsF9T .v%WsCB!\8G o'+_rSfSä E Ԧܯ((KRtqJZD+A$J(U.r:՜MJT׮gr:Jcuv@T ąluمJclRL~; uSL8nL&\vRH)LEKg5[ԙ` f]%xdIqE»Q3*YdBz3|Kpvth-A4KgO]7_VuҞς74?~MLBuM?xu37Ɠe ׶~=~_$VX~P~@?͖߇yww=n!al [Z$˫Q8 A/`#w^[]_{5{VSǜ|u+W #͐7>Ek) .]r]sx'[2gBNY*["ֶ Bb̐7>E))P=ҍB3JL}Tnj{n*b̐7>E)^]8nLV[] BN>H3FН?MKy_[ƧsR1~#hKgIJ'8^)Mw؅^,A3|^B&S$ CX&[T9ʐUL%dFͯ ر`wĜ9rd84ʰ#7dTQjaN*2F[dPMs47\M" aQ6ֹS'j̭cfTׂQQ֝ d i6꽿#86Soud>PaUe.]dIkէ~xn 8Ȧ}J{ΖmMgP>1#ޱ,?Q[{i0Õ#5<8 y|;@C/>XH,cdGa2c=*rTr^2y UiV *x:dXdRFsΘԢ, ͳ$USR%wu#k hgOтp0>?$O}X7p٫pRmPp Kg8ppLJޏC=b%=ܢa i&~j`^'AH;b.Tw-%=5)ZJE4ILclk7a -I}Gv;]TѴ[DC[E4AlԴ!铆Ge0g0a z@al0Lh=6o+8,5Lo`YI}}mlhLHJ> 0IJ 1$֞Ic`rS l2Il@/f4(0=Uٖ+2UlHX; ;,CA6_\"IB 1wtQd9̊tFcH+ fFlb5zzw{co8Iȸ \1YVSmzD͊h7cR.'GA3P=*RST&>(FT=Ǧuw+ǃĺ<4њɴ4 W5g=TW03\ȗcʡ0} <kd0Sj5.@]uLN vO?_ Z gEnI~W_[ K#kjn@ 3Jx @?é*4,as%<]([wEmi'[* 9Yݮ0JnTJaA'u(+YR[U6 gR*QU-9-ŵ"a95 HaHy2,9\A(mBJEŤ5ps!J!]dd&)]B Z IեS0KKǠejm,y[d>Mr\eetrxIENp"NW:-`[9#C"CajY4L Jt<YI V\(Qr9Pó`~on}ۛOsි9Gue+qW }]4 wn4}',=ЀDOo4y9X8I?A)f`9F&_lˌJ›ZUH+X07hn5By [M#j#FTZ2m3 Edm<-9rɨtjݑ@[Pb"r vAԜ?d wU3Ֆ>G䔲R x *ؓ5p\*sc+EiYkdCzVw5,Av6E|/땶_,; U5LJl~l..WYK+֥KBm&S=w D ݮW3X䳯/,h`sғJg:Z~A`Q~-?F`(͆n| "h;\[7 w."0D6fqαC8{j, ZAfp i,/f}蓆y&z_ ;1eȍaS ETwOlKmAwPݽl1R xyzt.&^U`򩭿ssqW"C-bq{4^=(8"QLHx |΄b{Z+rZ#S(V 4CCey M6ϙцACY( Eèv |1U’>y̔ QʂƠbFp ql4Ҍ A.De)ӕT F*Fc**W!ZdUp@:ӸlҸCFH6$K 8J۫UYZ>f~m:cHTq'^YUe(gv&!sj&> וPn2e fܖ|›uRLY.S,K L^U˳ 1m G0"xp#X Bgs-Q2pL[,BBK/椵ʒK$%Ҩ*fJ'PbP)*tak@we عOzC!tm++\>vH},7!W8\3[m =\n땴ސUuwضy۳M4{;aZ\YVkWdF?߁ууу vZD3-T5#}(U/՝N}5˫m)>3*0p4-xӋ!)!{vq4F } S\|$֜SxԜ H +-QGZAۗDw0s|'S#keV;{btn^}n5j뿎bF: Rh;:޻ ><=(&ӮE)ĩRӯ\u&FAO*C.x!Q~=)BΞT f`ȔtZkU5 2̔^a4bj5WRR VIьt=>}{q/12VIqh?ͫ֓sEmI-zIC%DsnsQϙsSOEDg" {0<]FQ`%< O+ȋE-`ZA8`M;)uQX6h}UE_"Xgٶ\ٙoG'-o ?K&ߝzrwV\\.͝o_}h-ʍJiq_cw[vWj=T6 tįõ?wIw}kF>ZboouM([7X֓pMchKi>vK DtRNg{[z@ևpMF{:4-y|-I}Gv;y70vKO$n}H $B|Tݚ^ [*!vHCkڭ y"zLI 
F:DehsOd!cFu?E}847?gWŦ~"}Y;Z/7#C"[qwr㵃}ߨ쇟~4IZ]^\})L4wOłN$1!#2 `WbJ0Ĝfգ3"XHy)/q}4ع$Ia7&td,ͼB%T"oFB%T,q_I3wSi2lv2lNbR'`JD0vPI0'Ď#+,ݕ 씖W!VM8cH=q<'N/2șiN3.Ą8ԛL9=mCX`SKjrJe>WWEvW6_/uJ(:f;ޜ7f Ҋl&'P*Pt 7ӏ-8V7ȫQꇅS^D(MlyϷ>3 y}wO,2{|vyl!#O^'fău7 ^WޤWb/!PLzU^8ZUK5ERV`uaT)!m,U'G޾ üT׌x}1N?L !NqT۪>?YNpP#nG\@M.‹=i3Bs AFGl)Ww'cQv: (߃Z@PZ-nn{xo[ ׽xCZdwUg7Мֿ.* Q} gT_m}g_KJeՕx5o #׈NFt!*؜KL IyFCjXA3g|3dZ7v DdgYeC%*ȊgZ{6_oCgo hz/KNSlӖd/ERl9FGr<3 !jBIsRpS*C3beb H&`)+LQ=,ʿ7v6`a/7X}Vtȱ7+8ޓH9B h`()|!D҂AH J2a Вj^=h+`DJǁRD3m҇"nB[jV*gy 8bdאn1J9pߕsgCȻ|^#l̄xTwHKRM6 ޥ1UFlo>M}! 50Z׎^>osͯ^fϛ,rg][>o.FG0۪Vpe]VpjJ'ԐxP 0, 7]MҺ]ф-%Vkڪ,~3p$S;(8ɴOT֫ fruNe=|{vDbT)h9 eK j'4i5!܄ "Y*c RyFhqľdz}gMIRY%DF#^  *gZ핤Q)Z(@{^2jRO={r4H7}3_>ZiM/ |+YX^z`yTҡ2JŵE' (cKH0Wc 5gc9#o߻LM})D xOgMD&c\gWh"V`-1J+8ԅ-/Iyr-en9OX^q "wwHU ?5Kr45C>SV, B*ЎA"[:z0HO3Dګ>FZxMVUҝoY :0M{~0O0 Y 0Q*GGJ S/n3P]RW;ާ)~LFVGӳw.NgDvf?)[};Mv4i]-eډP NZ9Zr%hh^c1x>UA=>x&xTy+)7TqݨIh85z @!XIxhU5R{⃊D9F u9xcTli Z [[[Md&8)nYdi)qM`EtO(8҆hcə2U$nՕ@v2[ohNj{' 2J~Q^( Prt$ R %$oge9Ok #7=ú:M i*£ i5냫3.;ݓ񲊬?~U4&ǵ Uo/stB^C,-.wFl&?ݜp2OiQ}9ř{.(|sq'Wɖ(!wOWGL>{mqEIMj"*6Mje!P5IL,8AyNY&G؉gdnvdHJ@5z?H [&:&,*mАNQXn]ը-}b"s%gfQd:G4cc ]㓑L?b#!qh-dWT35 `;®_:o"0+P:`.43;\N// |[pZ4:"]peZZ 5X[xֳ;,RAzq9gC{ƉzX:\}(IM!wMI7g24Kk.\\OH0J!)ŋFexCvhV,k<|^:Ͻ v';Mg)T?6ԦU5Rr^OJ'S^0eX!DA3{ĺcȣGZK±Wk `?mB@[As.jdKu4g]t£V|KMmGbo.ã-},?MPCo5ЩfNї=>8i oJO-uN=qr1q:KjP7hܿmD:W.M2'xtя/1oWu]̮nf'p80;~2uD=7=}x&l4WWTI.O( [7TR,e ȼvy+#[i:OA5ʨ# ZX TaJ %#AtC+[^sޜ! f۠fs. %RyHev}PBޱvװ VB[ j} ouZ{#f96hYDe!"m1^Ȟ)ݯ#]M5! 3xjjOxo_9^)kPB,kXM6T*vxIҔ 7״0mvC˴OZN P[t64H"vm|#5_#T} "4 'II_ɘlO).ܺ%t;`f?k6\t0b/a]=I o9uìo>bBV[W-g}4Z%^FO?R"c2~hz6,Hl'~[t4]9MEWNSѕzѕE zrY0NHZ:PeT6hRzǭZ`P MkԪrrR2ܻ:}ێI0b4o/U>CNfGTIE`gIMbO (դ'RzB!#f@Di:qNtuC# u %e*Ь:,#J)jL*t نZXiֆ˗S-r@ Ń!A7_\16 篾\|{gֽAA>K&2S@=~/75qV?ɲooy ?y^!j:¢4C/5GD8-UQmJCj+Jߛ' zkߵƒ$EY602Vݬ1g3\6:Oj^%A=2aC5J2W 딲V΁ohљ 9=\[ph""-rr NPyS"R9isO{\xN rEa*3Iz:M=t]׳rZEߢkV=Gz0;g 9fױݚ` )-J%CǀE7g@ro33ɤyl%#OMV%@/,FS˭ě5%uPijbɍ(-N:B8Uڸ<8')+3 նhȍzF FL \ |3>=g<}L "d߻NH-{<(;9.3GLRNWx\ -zYX'죵CLjPL3:pre4^SV8 (d̙ri"C^}e&Y\ ?m1b209+k-TXUJ+%$(.o B(,J(4d2ΡHd>&ui ߭{rhh$W\iD%h]z0D(`8'q vRt!tsuV/keaS>TQ{(MRp Obi-(wg~>]0x{~G|ɘ& BzJ~1lO7gg(?E="sf&X?\\IU%&Ix=:?J0GD+͌Jr![W4jh4+ؗPڣN p+#x/垂^KgiPN( ;FN)bשXC)a{l) .+aхHvp" ek$ e`Vv؉Z>:v pQd"L|L'{H٨'**6 ]Mt)#( GT63F CLjUgBOu)ebhaDdSZЦQ)9̖4Hgj>byZm6ER3Fttǒel7#i Fϓrqvuf x=8*Ѿi_>VJjz%6~TsNBA[\34>sg`֠^&J _.p.}Pγ(Z䮅^) P)Ȱ-Kn2i})H]ZA<%e W.tuM1a߀M;p9@jg]0^a/$}%qBH++q#GEV2aX{4S ^i.H*w7RRGbfԶat2`!@3@WbvAg UF FRk<*8POaNUK.O[w*pbCT޻ Y\+PUN2P\6ʂ00m˕ X5 AB+K+ne_$` *  8c*hӯn'KQU4qN>jL1W]~Jܚ,7Mۖjϛ;?޽yU_ӑJim=i'W7ftj08FL|ہ3f fȖMi J &Ѫr7t8aH1 0VM@cӯLI{ l/CҦ@ WځdUIGh+5 B!J36 408mYbp3{vϒw v(nX&< nĶRyb\_\l30}s"8ܛ(tPJU#SޣJ !~ ɡ`wVe`w[" A_${ AX 6eXe+i'AabJh[潠&r5/Q$30Sy +m7)oJPŜ:H'h$B=MZ3p0T #ġڐSӊ: G͎PfԜQ%̨`E?-ax`p:ztp=۲k-DHPvESV41C%=h ВN갢1CLwQ4%D:i:+7e(fjZaS:da#%c\np0p)N2NT-vePΙ9 Z*j"0*3Un J~c6s ?R89HuC}ݬj[>n^qP87'ggL gxv"Xq?GL=xf̻w]?u6꣖. xex49'` sK>u!K-8Rڞ`2΁D0gܤU5!x̣MwۻP j |.?G?dAjdhգx6?ҫmLˆ:DO4Bl^3N`N+-'>#K_@v{mY@(ųKxM+ #uf m8$O7[wAGrY+k %7̙RۤiR@/ y:{"6Z}ꈖ,.ޠ-|N싅7|D6EM/=1tcwߚ2 h#4jic?!b#xD_p)LJyK+Nl{UF+Ą8kS&u4[}$R̕GWi o_Z}Z)qv埓ZOB\x|n]~s*ޭFQX1 O`T)8<7\%bICp-)䥅ej$΂L2otɿ*0s0ȥbgt1/赳 raACCp-)y[58P/.5a8 _Gk-)$0̧N%nJIRȝ;"7k/Dzm٢O/]ҥg,9fZJoWzƸMv×׋i}ʠ!\t \ gAA:_mqA^(C)m'ǏTvDq>a2C5\LŌPEZf{J)~z! $칹4bE m*~f 3KCn=:P&BseBZcӕF:ʽhk Ze<8"<0J|&^T=\}fntT-ӄ;^*ͧ{sSLN2lӟNwh Ʀ g+Qy Hhܺ% u*I<0W̅QI[$t<>d" 3*i13R>bx&ՏYCk G2iBț7k Mfc+%uIb3PnX/ d/ĉLFA ;5Se6SlaZ:ʋݟcܡuy3;=ޢ!t*SV4[ $iZ{(M/4Jp! 
˨6!Tz"y${Ƶ#I6# _tZLU Q|:vHM~G<8p^X]@ЌD8/-~킶HK:]=JqSӇ]w3 ";w?`dcjoMsؿIs ԁ -bU@I% {N"3Ι$T ojOvgfFaWLY5fy=@ϗ+r)c97hPV.ߧtBp<_ggȂg֣S>$ g3䊓l]OdZ'[I;vfVWbVwŹU3bu(akw1|~t=>$)$<ñۻtth"lV< nJ=Ϗ\]B<|di1O7ȁ'^Ԡ(A1Qaz/u{\O9ŞhSSnjj) *U`{ >x-6B94P.m\T"} Mi\T nt:~8QbP)OwUpRrmDH,I#0tΫ M+v8Hji ;u4iVw\ݦb3vSsU\ <ǫ% PӖ,NC#? r iC8YYYޝR\li.x pb^b,L8qR,~qi֚vXp򞇴e,&H9p8jO+z,HUy ڔlnF#=f4M2FyOrL9b*P@9Rn a:T%ӫ-`LYqm"NG3 R%<ณBGT K#Qʢ@N 7å@)(SR;JA zm;,ֲW7c~R>=KT@MZO5}DǼb>l76Y|&nPql5B,\_\|yi&tFj/eّQ VH㇫rtuH?coJkfYM~9IBz&:]tx̟NcŒt(cnyt~4nO:>'WRY!b Ҍ?|١G-8k ' ܞsVir~DLۉg4g8XTl5K okȺ8EjF/,8:]\x"EMB NPpƓTnҒ@=/iySvp޺bX,Q.lFXT ˱ Am qx|Eq&z9`f-CLc11ט?ieiQb+KJ3Fn?kĄ 4icASre4^SVUD4AhJ2L(`{W9Az!u$駇#uQbn ,)s\ 9f 6J`20\RJ(Up(\|!\OZ,VFX]S7gX5]6؇-CلG W<NB^PwESuQh[Z!̹dK wY';vϴXOP,tLdM6>R|ާwd0;ϵ{pV~1?KWovy߽[̀_PM`&@f줣ys!yȠ#'ڀrYYnf$F!*r{FM3!Ukr[}u՝}$/؁ ɰX sf+"O,+`( n$ԒiSv j><=NJ4Q*G*w] gJKi歆#Dh4Lz7Gec[YܝhQK`jyUQz])]]V5c`qx M,s#aW%p?{@ `o+my.vsӊO68vAW?MԓM?AW/QLdY-Wю-W]Vx}bAkXa5prN1ӳ+]&hpzQlشiu6O|J6ҕlC͹hFIр碳E:1:EmIƘ] MQwӐP!VRay^]]̭Qø5˒[D1P9#A2s%8x&qohcc}kt+[WQmaĒ3ct-y c;h=Ç &$?3-&t{h&EA,&L L "rzmv1A̜e_SMi}Z+XJ[`(QL8Y M7Z#-^'?n H9Ϲ?FY=\V=%\ozp>"῕OI/O*Eຨ\uCZ9k we4.3ڻlgI3F5KEd РKY g\:5ooD]kK̇6N2vOu]= gQE Ǖ<|d cXeyhDYd?I"3K!32+aZ`ךÚ`ӽ dkQ\`#AmBّB0#2tB*S'_V䝺0HdѢB;[,C5QT= Yn< -g ɧ.ʍ^X,R^H+5v{J×ͽP !4 Ql1/VNqr60 z5S`-etLiy_r7ݕՎ^^]m1wwƅtF\[E~9&)%QUL'[Y+m담NIiEs'go^VqZI.!J҇cdh#)DmEB8ihCr6@`$Q"|9U%EKÃm0!^>:잋x+ubF7f]Sծ 5K34kfolUF]2π/3dIs& )kTpF("g^]܅<1Ï1Y!v"Et9:BpƁpBDO9d0c|b@tDLMJ؂,\V҆Lki\Ԑ\Ai6譗40ɵ5i nV#mhn7y,eۤtyl>b|A͕VaJd0 N; 4>nUE{/9'Ũ%Y2$KduuF;Ek FFH IO;`Oң 7hM d҆fR ]8A@OLUy4nUW؁̖h J! kج-*{2TY2O3YpRӆX;&I/_W2 N3a S&}0P/ 5 c`$%u١b@ѰFnȔ{4 lx~Fk&h۩;Ƿ{@)xف+s=\|'X5 X dpr A0%EZ ;-0' \It YWES!Rk%s)j{ i@ۉpN-~$z7$c42c=fGM>&[I%*}lђEQ'4jb?PL<0%@3&|`f#;o>u#y8Bg .ա "@9m67_m%rAx<@%H/(oE]%vM<'m>>dyۗފ *>'ܮ_KG bjѯ+7{/󊷯S^1f+w){x/_Z6{fQ$_>O\#A֍Ww+~-/7&Ύn,g)iHX+CnG&9(,zEX (Hi /B1Yȏc/Դ˴ li!X@))6"'ߧuz'D]~^yjO*9Rg@O&0J{* 76Ece/i_B/ e65MNu]1 M-4}rEu˺huKD-ky$Zmmx͗fK-.5g8rC%nwC5 D+Dd2]E59:vՎڌ\#VTkƓf͏'H5>`NƘ?Fn# ޜd6 cCG2fJjB*Y!JKc~s+5-wi)Ԃl.64oH!\C}2UșTqr$Ql3bӧZӧ^m}D}/')#/%7  S'p9(`)\E0\UFʺHvKBrWY4zls'%o"<)fZP[aIb’4&ue05l:@6E>g.4jr7nAm,r w_&QĹ@l Wb=mg긲긢֭>W łb23_y.X=4 Oao.[]-2<;$U2Raݺ{|J{l|fm!lp{n`zYIc 99"pOօRsiW$G,ɟ E>ӃRh̜o I<$md-vQ.xEOh6Ed` 2 ]))]ߘ%mU͗#!!p3Hlr&a d22wP$jN!{Vr=6k`h!(C(w(?Mw3%ҕZӦ#(j6ArrжsIrn^e"f =bTWӾx̄TkTN)l$%燫WՇ %wl%JZFh ;(\J9{*\z׍Bؙ2 /|VʟNqd{y0ns3˲².~_J2\LQ dsE}Äq<BZj&1QYj}]%^%*3vHݠh/cr)eGQx' 5kQ6455c es&+, [ݲL ˙Q&-wIoo'|؜#㠹RQB>JP,nϞsePLqy5)NMs}Z7vj|ZS``3\\(DꏗyU}v7ʅ&ŏnGuY6|B>AH7ρPZV/֞6YUfIA `+9T.\9! 7pۿy]5̒s'Ņ1SϹeY5h.qtc>\7+RPGǯ,Ya^Wd_N%AMAwe0i bd'6c?,T,W 35i}-6xȏ@sm7u VYzs4Z%o*UVH9CaI4Xt@0 i!8F;#QxflcqMӐs nJuDXQS䞃7ȅE˜E 1e1AJjD:XcŠ\)ܣLFn䦆IVց 7B9 4k2;. }xGD/G`,C4&a9M0 ^J#[>T"Z0=텕{p $-Ӟ3. CJ(b`2uƓda l2ђyvh EXR:I%zA`Tb0h``mh){<-9D+%2*B.IdhCt^G5whueF1r'Zn"v`E!"}uДe4LiJ&FagfNS3m oI9R18 ȆEmK_ffg)~W+lutW{a;"A0ķt\,ڢl ):v'-^sp. V*FT/5GYg |^j#l ვ_YA@yLj!G??q^MXr^w)53άO|vܕ& #nje\&D FT]lwX]=!'.&Wt;7qRT #ab~M޹rzmC:ia3jaӀ<cgM̒Dxot`U!UFGqm4{>$F9#-HF/d& I HeF-WBIaЬ{tAEhيBrah:b aҺY)QdPBrbS$ژS,䓅h'mo" CVTC!0-L C8URr$ <&#TМBکlnd@2ͅ$gM&5dt9A1Zk{AU5::,jQݎ5ADzcq냨Gu˺rV"_>fr.tw-eVj%ฮmizCS,{ ͮ}]ak81 5DbiH NQ̩9] ]^1r6> c^ÂJhETybX U0C5x͒y6Fx֕ =zf/* OtwTZD&#JtI?TΗI%{be=9UJqHXw1;Z02TBG9+q&' Eʠϋ-} I+RW8ƲGÉ,$ xnmn1⯞ QRADʏp-̀zyH /:珟WS*nP){1~Fۘ Y7@b!6̘V7(Jn! 
s Áw ]N͉ R)!ZI9PO>LY T#/jRgdTQT \ )Rk8"Tcl#D#2SȆ4ReڤV\3R F;YYS!]]Yr<u>8Jr:!Z%(]g4@֏s}{j1vՕePVhoĚ4ՙgq'tʬ8nJ}wS)Wlwzxy\3Mf?,:xoѣk?89jA}OR0*36#"T[.t,Fcw)r#ar Cxx{+6V٣!QuoC7}ݕswk4 fܳ-:QC`.\ok{1b%ar#duo=jau5֠Ucݾ#M% WfE>XOFGEpzd`2_us!k=~PvhaeF(g7WV:,LC&g庩u>&,K&E6!oOvG%*ޕ䯳/Ο|q[onruY[">2CO4}Zzt2gdi'(7M/CwzA7 {Zz2*=ِJ]-z?*wlO~h@@-ds#9!Vġ$ HL!t|h`vvk&'9GN9s4kBJY'q@bŔeԕ^S<3xr)˴HAԐNCN9ුQ\I <`z%&tŔ^' $>mxgk}ġkIDX%ifDR0. !P6[[Sќ<@e"ߜP>D9q_ >! tVzhs7_( ?6jQD2][tca˩~CPJ,dbw 'Kddrɐ' TWShvq~AU" ”Lų@5yfeZ"((Ҵ`RV2n-L Ip92YF hiS@g 7EJ##7)iEDnF "7%rS05 1 > R83H~ 5J2&N-41I%`[Hr(D j4M&֛u&5;;g ֔3֫ⷀ2j:XPԁS)(6XѬ>ر`9R>\+H]lRXv5w8k76SDhYA4و1E\6FhփE} w`Bnp7`4L3Y F 0Aր[H†Fk/hb=E5(jEF Şњ(:Wb!њA`hADat#nW։_vxmϿNeQAH<`MZM+k2[@Ж AF)*(%Yf$3ܨf4cfv&| YitITS_Rm@.̐v 9u,K# w)9Ejd9gDh F|?M52KmZ\9Ɔ2f7$RDogA.}VM_^:Ů'~W=S27ͯ#ڭN'Z şy*0n_1xUNnE;eN+(b9Rpo CX5r?oVҳ%?%}t +WggI (j; 8OZ7I[,!&퀾T gZ.$䅋hL)>zy׺UK5кb":hbU¥n n]H p}Zu<븿Zmxq8.L+MݏIv3G,sl3Yx7 Ϫ^OK $_LNO'O߿?Lf=KDj pJQR=l|(2R^TX |_ul6a򗻉f-N+[KԿLʓVN F˧b2kڶ cDd9LB\ӕt|*a*R{@{S݂̽J{G&۷S9 5[Ri})\(K*R"rJ59 %9uN"QӲ]PO4#VBjWm9DC5j aTĨľIyZMk;*~n BB^).覎֣C{ 岛}Qվbݣo y=n{}sV.Mc8!W;tj˻1pV>pZ+ۗO{49b1xyQֻ8vY*XpT̸#kۏq<8|xt10 hLkS=$Ga%P9=ia PrHf5lt?tĖJJGl~p<,"Ca|.Y+4_]⒪\Rpe˔r Tv7CH5H:v4j8YmuԈeDI&o)鸾=VaNߗ[ ȫ̟%[B|ﰤB/oQdK>ȮGzC4|f۞p!|={$Gˏo;>񮐵jpYq7[Wxwok*E˳<mWXhRk{Ъ< Rݜx>9$Zc3 b8GhRrRs.Y s駛KXGrrw6{rqg35QɣUA?CmSBBj@olI ՑBJAoK4gj_Q=xkLܗdڻL%)cٲ#ɓ%[ԫIRN|h͉j}7Ш=%κw}1je~23af8%L?϶ Md4L׋e?^tN-:BX j)PA֖Xf`D,37 \lO&fT]n JzdEg_sYl zyj+CqeQ4-deNY}sYݲJ}٪U2<$ANgߍ&/9ƭc:9=|p^Cw9{m3p0?_y=2Ǒjx@\8]r=mh`ܰ5Eח\PNO>D׬bJ+$֭qhW^U-p/"jm`<SAeP"$ZDžqK#vT1 05ؠkV.e#-sRO\+-T_" d "/I؉"' C[$@lbTLKnƖ:6zļ4T(Ɍ6JS4W#e.%~lcآbX1LBzRΈ3 2 xˉ㖉WUbXqI%ܜԺ0nwRO4]N{2<hs&$]w }ޜLboc3 As)rEQXs BXb .g%p' wx哼$XKjCtJL!siiqaQ&(УC%PsƹoX]]$`^J0ږYЀpiQP ʮ"-OوF)F4JjrdW$/b)ukS:-l1Ϭ*TF,E70d4OM |@G 6Q9F Y9m=IhD3 |%M䙷:fmċyu==12W#Uam0sX9"T65ܼdg3w7 5B‰2Jb*xՅBc(] d02W)jUJ6X/-Lb)oy"䎃׸hA&@kXKWZZ0?x*ѻV뉞MZ=3N0FX T{PMj}$=҇u7GrH=d*y^ "8)>mSfh^k [xK) 1uᏗWXP%Th&lRa5PIʥV-a=%5Ѕ䚈xT,h)RG-Pw!b tffns{4JvhGF5z8@&nO mNc;Hw‡y.tB3s_Y3@m2vdўG??ޙXe#Mkݽ+p4Jisթ'A 8y9U|tr};N^ 9LؗTdx@P=0+-\Mud|!`,gUXPU*๊0*U gz%nnト'GK $:rI,;n=`!áheD M M+.~ݭ9V-jtg&N >)6& Ri&}2vᣝ_G&N Q8VRLM.1n0nwX8z\ *Ɛ >~{)/@+xbM*+C#t:L.nH/(ލq7!* sowpbIVk_17Q>zwaV~k|819T;O][t072"?ӿWOeJ\5 uzüũ,䕛hM)RI8԰s˻i>wKŠĻp):]{< 5[Mh28Qr*f q²l7uڨ9w{ʛ鎣3"YPY-+ZNtm3i-TاCĔV2!V ם !b˘؉+;hh% Vz3s;)NwW{{Ӫn|8s!qg9a9mwf@fqyf=f0]OwT\yVwU'Ж5hUqϜh}~m[Jfm893@_wٻڠ-t B ~ӟzݥΐe?C2io:x?'uͽO~;y3N\;.Ggnp3,o.ߣqwD*KBtX*<.ߧZ7ЖϏy4CAq^tT^j|6Js/3_KI~QM̓X[X$Ub䉓ino!VGOnuZE+od punW%6?/|dgmYkC]jA]SSiO! Y/LYIpasP5'Wc$08NC*LRPqPy.*+SsRUMVAJcTpRٌI~%(J,<[k L R]=ZU{;L 3SUAoNR7RM}V>$Ukl$o.UN֊$[+@}r2:R,V"WZf]If %(-˶^Kw]BU!isnT7 xe"O.w>h}ߪZǰɐo}v-Ih7KZS ÄXP]?S&eP3!~y\hS`VyF[ _%1)JT)88͌ǡB)8/ 8_x%@|! ;xcVEi0Z; J RZq*'uF@DX(, *`Tlq9j[IP,4^bf5 2T[EHZ{o 0yhˎ"E`^݂ͧCId|cfa2 {sk!w#xl+Œ/_C\㕟Ƿ;[Ywb<͗ Ҭ~?,&? KRhӷGJїG(v?4ht6qr\JF#T**i V$U!yzƊ"O8yɀ#x Z@%H-bL ' IM򑩖ĒJ鎥XW`xVj[y뻛`cBjtwwX8ٜś/vfYv&>ixo}_$<<'igDȖ˵fBV(:F kC\[DKșF_aJV y <i/@,/!om_Ĵ$ɚKʩ1kB-+Y~*!ZGpB)sd!e/ddAYfu{By%:Q++eצXZV޸VfR*o,K;G?˾T_dT݌]x3l | ~t W%\Ep]U%,[.@58c(6! 
KyQ0!˵PăuTbv~:E,yQ;0 73H8eoYc^9_JףO|>TMU@1/hXT;m 0bH(g)!9#D%gۭ!+*x,(p`qs (:F p0s~}:+& UdA+u %%q>eaCaᮧS\+͓X\ plvKUpըQ@^m*@YzA3zr1*gY9S->e+~23BN"ӿϳ7SGJQ.w*cz>  )7{MqʾzbHqi}Fx+; q$\:/q'E&rqs[I+ o1޾y%s]qЅsGŨh~壧&ۇn}~9(Tu.v(kJfW)^WcKDYg?~Y<-Ç 2F3޵^dw٥.-sr6E!k0l?ʯZ;вʏqQ Kx f:ᘎ*2Tߎ.G4[2p}wTj)?:wYk/#]cw ^.݌e ݌qFY&u򸞥 0c0ǧ_?-T=˿r;k,o[Y ~p6NS]rhºuxCsc\`8:?AhÉmppy;a2M .XH핀A$ROBgDDC<$$JNAI3aCL YBR%$(~n]--i `y?[ >O ϐsԳȓ8Q dIƱI YFaF4\c" 6JG&Ǣ°'Zx*⟌J YVm;!\rt"(SSQ>0#IBAh)YKu2-G9qE'&0 B[Ͷ8aM1)G2XR ('Aި\$@),Ve'Ұ D#|<6%P'4r)[Pp]7BF<qjk{W;'DW ^%ٳIwv??F=kPN޹\SJ0@S~@>=߀w1c5텣|܇22WB)G R&J 3a)15JpC'?OYg 835!  ,7vş0(Gv6G$/3ooqЏzjEpJ*"Ԡ8YGLk.cM3M5xfYLL^ ";r7sQз?f3-aaa L%J3 CP ϋwtPAȞ6෣?r`|q[;)8Ηr|97,gy lr+wvޔ '}{VXG8y=6/F$J=mJ6XtGS\^7m'tm^-"aBO]8Hy`s~C^xWc /ף9޹Mh߱&koGAFKu8*:_: T|%?e?Ťbr]Ԑ8tK<x2w7 9!(}z&s_;檅-zA%0a#nz:vE;E`u;=3@Z3m0",GDjpfi2J3nW J0Xp\N#'2J!7m@3Oܟj ]ȳ>+'!ڽI{~|v^y^N$`5K Du,\1q,53E&BD)9Mg1,om3Lt,oHu0T;;>/t GW=Eg"Gs~_4{Tq\N`9 \J ED6)/dz5G8J.ӄ'H^1&Lj+Yv*"#XT3€p07 I#൰c$kO 8ge):aB1a},|gL'5a 'Q$*cܖ2{ZnM$KlQ Xd-` 1[# @`Ibp.^8Bs4/KO|I^{]!CNDwIxvt6N=dF !XCDZ,ĥn/xGPHUF ?܄ lWo}*.`7EЀ FPk^p֘:QbXjaHlWaH5XHr\<~WyW ) Ahmi%e۽IkZ--Pm7& Zih]9x!Pf xx;u"j懫(y&jyIԇs2ui[I-O[ѽdRkZ|J>=ckpK {S`CSJA;ػPзȁ9PMwTkJ$& 5T$ܡ҆|""S oMQ(>:F.G`ڭ E4H8}nh a=~_`eJ8R p4*u.ڻ\BC3Pmu[oS r7:%b;D h) ^8N:#k!<S$9¢Dij T+Da ۷P"r2_F;K*V t6swCN5kw{sP_>-w22;Ng͢ by fԣNaQ1t8׌=_ v4`+dE䗰)~)V=ܭxȃ`ŏ?])g%g:Z9CT.[CaJ" %,DDSTDwO\P[2[oR~^=_n_CwՏ'{WcMr7Π@'SHuAE7D9 sw"(D _P/~דL(qYI0O{x|*N=ZnT)NPAPM&:U$IIۮ2(k<hkC^n҇9Äl!>.kTCfEUGS.U==.^W & {K8/*4<)=(RZ=Vww]^r@~$[9¼#_иC5{:Z:+tk,+<_}Rgىoj MF\K-%(CrdUO&Vn,ZYt qS¹PE$E8"[$8 4Ix18ʢ,%R$)j!GzTMYC_t=$:3ĎePe9l=ɇ =SJ46VdG}({>ԯ] =_= t q=BwKXy[oS^]O*8Gr=I`&XWO 3;,l> ;rTB[++. J(V+8[Kś ٰaSjΛO&6׋lC& -L֢I9of&z>z\?短98kFͣ՗hq566W'/NhhwUR} jFPj,)aOD*`P `((l=Z%VZ%$"6:dJ#I4)e JLXe% ŘD [?Xi 5O^3PÇ@p7co4f(]^??aȻa8Y(7 gU1ADL_aM/?X6_nS<37ep}qOo , G2Va1ǔ10ƅC0>i;U<%FMAg30ܵbitMr*~2lUT!V)n UԮSpKR$԰BjW$ ].+^0MLn=[.EOjr q%jP+p (HST;8Ǿ),ɘpk$L HXe* Ś4Ɖ .f244!LDDL*,i FF1,J RUo#` ApaĔ`s阨fd_{_1ˡENx"6yE(^, F>DΘO@qv72'y<YctZ&p~5OM ᓗ<#_OHu[nʭ+VZrŮmga.tf۬d8Mpk?{ƍ KOɩPиJY[{jlf:`0N$R!)+?A$L׍ѝKֺ&KS(XҔJ&xn5EW2T<}y$#H$e8@Fc&RΜj&k@9Iֆ[@"dAm!F#3D̙ TjBe q֒΄Ŝ%&׎A.0Jx()jD,1X&.Da BnY54ͰŒLb Vr+Vh"Cƍo[!'gY˧0.0Q;KTxk`";g T¸bǒ)xU"/ ຒ[ h.$A\@R Dzϙ)0,P`ع>dM%3^rhsX'vq%C<8Uc@c}~{en9GQ,|hz{_7D[Ō23ځ:wQQk (ffplMD'4K4UDTi7¥N8%WTbw3,v|2g2wJCf4vSK-Hٝ6@k8AGt~Olq6UÞ,u.3~\_gs㍫JRƒ#;v1w!T?}Cˊ y_SJǰR pt}[6t<(qE_w~+q(a]R =9&˥F)Q(jUY~RL, $+؎x[{s;uPH (P"b 9d _E TIbR2:"LqAtE8,Ip3HI(,Mf܂N!&h̥rW)KL})ƄBM3  ӆP&1ae`-Ҕa,Kpfs#˰Qрj""7fX#e% O rH8͈UD( m@80xVΘ(:ZGrGCO@Z9hfS}I?y,uf @)quGt,~[[ep \ĭvmO;__u=?{,\Y͆|A䛲dziܔY@Ę%f,-s"+C`9?g{]7f<}sqQi*71 3K*IX(QX$1~_Xmr/D <1.+a^6s+Zi$nD25$ђ I4h }io`@T(,>6dRA?K_KӔ]*>r9Մb&c+ڨ\46AcB%G%tvD{; NԘ6BMD a)3RL8쾟Rd1I96ZY i-) 5V~B4ev}@Ў$_imΡ/=C?wTbċNNRe) uðI$oK6K%2$ɔ2 s&2/G89`h-ܗ<};5vꗹ{dCQ7fUQt7n݋y4^!_-vyC11 [^z-%"ߐeC~cTX[W TMI<>fI&@;n[ӟox\$IC kEӮ41,Z9 -[,ŀs.V>z5|Dx1/0A[ oKmܶ1K4<:W͗0{QʵM$7>N,&/wYaQLTb-^9tJ!]$^#aTF@sytޥ-{;Q<'Q.X\^X^cË[bzQ dr۟L9G7tvNݺp56p56]l(7Cl+RVN>_;3gTZ(-숣 ;vQ[{>=zlX6F;yWzq<,xjR͝ZbX 銇Alo5kB~1S).l]LeE8%_Q?u,d u`CYkw:|U|vƑR$%X >lEÙ>Lj56:H}}`أ2zġki͖ݜ9SF>&߲m&B$2?WF@LNzJl>Q+ߒ") zW:FL&ZVRf2f8+V,UjVe u}Dѫ.^<؁d4ykoA/G1PBۃkBu %qRP0AOet NsOi(p1jU)*ؘa;G€&vugPD8^k;GSj.1?;E8EC$W赴y&_!FŽʅb=Ʌl(ԡ( c+ -ۙ~+^{$(նYҁyƤZRiB/VwP|xv3ZOv;sf}>LfͦwPڛyٽ?Lg?溜&΍LnSglIKMidaލ/$4|ƒrBoө_o拘!S)^˕r}Q&νY<8K\?z#ξYomzsJ'(r9˨% ELqݦvDB[S BD3hvb)Xoڭj6$ F2%k$4n^j>X,;laJ±bjK&'"(=X|QEe/hafy%9>,A VfeV#82![JY#yX&d,!#O5Ç;$zJ BqCo-;P zk-joS Zdi˨fލڤ1IM1@>J4kyeoARaEARhwN/4P5 Dt>v0p0/G> 󝞿u峦i݋|0g?ٻ<5s{N^„‘PqF|L(;1/K +an㗻5dAm(L_mDmˣWp*qOgp)KG`-(*[7~-CE79~a|?$MN{LbYv#=G/_2OJ[7, 
O>;:nI;GB.6H]gۀ^@oj3p?-QAA]kl1ϦlaK0o;} n1Gڄ`NE?>5!Xelg&4nWZbz'ޏ?,RaɡXœ}}ٱ]X|D8hJJ"&u Dr=V!TbԭGwF)t=ukL]1*q*I)z7_1޺|Xt1>%"C%c=fS~x2ZX>W3D 'ۃ8 ȇNV'j [Gf:Vċ1 @,uFr, 9+UYU5U%3`]0rj )z[V;qD6U[ۜ{!]p0Uw Rps7 "^2 /9 ?%呼H~88? J+2]x %Q LiߣyNpȟ69sHh]T!hWᴿTe| ^hvl Nۥ{q, kQke[9|<`liA~mLj J1 @A+ဳ:HO{>b%*yJC^a6'~ޤC" -?V( p<'Oy[be 0+*1܁:C.m;y*-6;8*~@ 0r[/^8mRUPDNu+t `DO۱2 ѕ:Y:P/[8#$^DTL߯wci/Vd|۩qLPTg^GXLd_ɘGM "v$<Duphå-: ;o7k)Pc81ApJ5'80s_[%"IWAbU{wWyJOVuW(),x2QuT\Ў5N۸; D *X+[ۚqɊy>1oo8GuՅ#W6 g&X=uM^ Z~^>_^2 "9zʧ,D i m|?\N"m_~X/\qwgot7g޺>M}y燆9c(c  khj)d#If PlFqZmusL [.b/$[m0BdH2A"H@ VaCfgFp,8ldG#kDܺ%V2A # L& a:/e?@yfsfƴt@iJt)Lg&-V,όԦ1A1ܢoivi:/cX$N*4?~[7F(Rck/ HD><^5KQv% EhvΒA\$ac3p~-v~-xC2xO _8ZÓD 偤sx=dBNO'8\4vhG/XG0׫?b bת?{ƍ K/90_T'T `q#Zl忟Ɛw̕)Wl3F=F;?ݍRX7;H2$&HM 7E \>!&`C{023%Ӑ3u^N2"c9a>r_/) }Jran揳CΟ?NCGb9ߺ G(=|H1C()T*sPebtю|dgRarLׂXRα89V+TZ$[2X  _@Fi(uŊqϨ fn5Ŋ[jO|~L2w@ g;WC)fǷ~ɭ߀CCc>%;ku˼5Lk.&*ә6@%B P4UXqA.⩾ޤZlA7'ӟt2MQ[(,\`>]xuW Ϗ9(Nj{U#c5WofvD/ 8֍ G3eXGҸz;d㪅9wʡ !0ϐKXd=SPBNP&=< R5WCpqb3Vf6RySwȂ7bQMR(fbo6 ;^{㌬q 0뾼_]L ,`aquQ˴lFR7<nBV8f>v$ ݻ8z=.2~ǻ'\ќ*Ĉ߯ۧ/ GEppu{Zg4͗4O;3\c,yx w@`%B՟xdDPvAn~芏;!n tXpF!_ZGYC!Z<߾p. /tS)|}m:td>jy D+пr8#nÌ-WXEFsI3gIf`ذhK&_ͥ d8!œTh3B׆H"Ȱ 20$S¨2I [HW $>ῥאpvPrIcNA3~qrPDP/8NMmMǒw8f[xaÏ CSBebzI efdazGgyq/R6ՋsЌ{z& Ci%/ .:L~KmGe_$ b[jt'|kGαJ(iG@A<*J$J I -2j_ʬe+r$0k(PZ>p|b\_.erPDh]Pdr' x$&HMQRK=*äeDp%8=LXo,ʦq=?PwZ^]C>vGo`!{<B?_sӥ~|t`-&jTPqTim(n}djQVQrM;Vxm(^,gu#׳6zkfx"viwLN>w{/-H*pMxpΒ) ~]px}d W:m y6Vѽ>9jOi5FqɾᴾkQ1!QŁ7;1xg-. Z{ئrK#/s̓Zy`ϱU<ǒ಑6pq{9Vفr+[d煬~(R\Ss)ք˾ہKk_w=٪Y3huRޤZSM>aPL str?Xn-YfLEIRU|5dMa9yELCELQ{݀}nuy#:u(NjUh޶vc=R5!!\D˔T6q%7Z%і SHe;g뎒y\!׌owGy.9>t. ;KDb&[ 5ԝI2a2p7ލ4D}L:piޠ]bPa2 = ߬ekLG$sFDXwͽ_M+c!yXl68L/iizWg5;_hq*@S`Pm(4'WBMA yꢆǘ<[@dk 4'܏wB2x활U8HNJvG)1ʊΥK Mp5f@c8op: ?"3a>y_8nZ*AN- I8X 4c3Wmy=1޹d6MOg+RxB-څwijׇݡW `f͒$;$Gz](s.5=j y8g-y^%J!T%$N z4PJn~VB8jdF-S>-'09%:$D3{.A>CQTܖ4!fyO2mdV.79m*ւa7&se\Y &-V)RI_Ybʈ8&<1MSBt³Tq:GT ,-U9 6R(Cu"3Qb 4ppk R1:}"!6%20ڠf2 Jda{_r,]Pe, ӱ=9WCd0B5"2J dxbcQ*|IQ,<.(ˈV*;fAp>8@1qI|C" &-L>6Eu!88@d98rt0՘1WLUi* ɜMKgxfd, (32ӈjp8ڒ\%q{u2"\JLjm)\! Zpɜ4 ~\HeI3Mik`bQ~-Np>d{)YG`v=\)U0 ZsDQo:JIlRMWjBk1#`# :K9e:5 F͇*t"J=_T))ćHk >Dqa2&rybX.C%:<˂; AiW*.c8*j)DI_{ -Q LHd]C1%9*3L8/3+02)/`Lee晑*E(t%3!EכTS"}^vDj6BaVQbU`_k9LOO]񭛁۾ǩU8x/ӇFCXPKN ̭2)G$YW2tZuE={ >2b|E=5NSk7m.Sߴ4t}(nK ƛGujk3?񂒜Ldk8۽$K}1ѹNj+Tle\}0w =Gg뺰ULM1P)d;gs/ "o<F#bkXn:HI kdu`;'Jg$\OpڕhWk)OW,C7Be-CDJPnPOǶ,T<ֈ l '+> d'gMPׅ]Bv'T3dv4_vオS1>1^z`zF! 
BVkw^AO '%aZ{kcWJ{[S Zh@La=KtJ@MYY1w-W=[-`$׽ݱ?M 7Aob|B:Ml0)A)GYRuz kRlC,T&ESS?|!D_'(C3 A݁.7>6ys _,ofQV{M&\7T'O_X͔#S4ERˆQB,IUE'QŬ$Q5j Vd"ADCJ(Ê!EgsʠP- /1eܦFe -J&ϡLe[|uZ#Kj-Jj5Ja^PM Ț}\kI-NI}1we-[r Qt<ZmTFYz>8uV_ M bcGM %w گBJZsB{ԤաpsR -U[ʫh|ډC筥mŹs4Hb9G#X8 I$,&v {UNZVyV8B[.qȱSda/1EB"#&G8DS%DCPvv%(1) x僱mͦ2%bTZG6k AV<v jGK;현6-[q5TBFqgԸPh?l)a.P[8[KlRշXӠBKl4kNxs}BG郖>k-MKbp/T6-]vki[qJiBKۢki[nj/-z ?w-m\IM@r$>ys7z>sݩܥŚL~ nrf5,2 Cr~LhLM­^]R6aљe\Ӌ!j\U7z0"\%:(n;Ey$P˥-(r+bvyah@է0r0&rjUmd~)]NxU9Z.zb5ci5c|v&uuԚ(q88-#An9JRUF\i'zgVdJ0N57xxhqB {3"zԒw&iR3 x|w!}Um(<BU͗%ψD.ΩjC8k&%a A[s܄q=芼{ c96%M GK7(\dE.+( }~1$g=pPݰV8umQe1Rpu:HFS&Hx]4d8Q[!N3u0KedH`HԢ) 21N' h@0r;V J.̑40C .蔲gG#O9 *6l C[&g%>QZg֛$]"3;^3sl cM%ePTu13Jx=$^YXH{G^&kIB& [9zY5~J$219r R j5(pPҒkE=[,%AN Cȿ1C>XgMQb#b$/0"8buJHde'/nh=qɓ @ c6Y(ȞvSպVlW  Ve{Jւ[.Ĥr1E i2iڸ4> R`C~r0Ah UQ3XuӲ&|v6I;"|(mbETAS@WOмXI%}4f&CYe{aOMBkps: z6hF$t{؁ lX;-t-v6P8ȼ5)-#pRMxcޢf>hfxQhC% E[s aCxiw1:}[Yks \|OsTtkq78ʿ_@';kP9ʼnf/-]4ZۚG貵H/=R(`˟ge[ͽ][3A{}d9j>r_.ryopiaEG ag?N\jgxBf7a ~=Nq19rGy~Wjtv-ѱJ>'hO ظB,{ ^Ji>䢻}S'9=DcyrY>扥tiaCNˍo2~Qhz께4~|}~?Xj-jžMywz>_7Cv !B=qlKZf 9:ٚ匫YJMޝ+q3 FRw\ǹ?f'w/;젡mҕ;=g,u ګ~1]^V υ@ K74%*= [,y6򄟬UǞ (sEgsXf^8{ώCuI[b_'^.NI8ͽ q1;˛+q .oysf!u:z^sh4^?6A] NNUyIK48Edo.<ͪy E  e-s0ps锠)^[%;kSc}I+.fe<|IM|Vȉ5Ay7v.o$8-pG;߯&Scv$Ӄ5(qSɁ狇vT|fWn)Ks˯"KllB4q`x']R;*yK.~i"(蜲2E0^Aeo_|=0N_F+ d]jbp"t$_cVIE,iJCFFx[w{ԋ#zX}̳T7O2.y[?O/xgϏ7?=v,{P~W 駓Zd1DlsOr~#NOOMӬ'=å*R6VOxz]$g?:^ۄ4t~OG?ok+Al}xFjTh^ظR9,빯'u#ޯwm~K&Yd'"`0%3hښm,w"=#>%<7nZ>b܍}CjC"mP9ehLb e-Eb.sl02֖q8ڧfu5FG*JY%^3…f}wQ2T3Bb4v09p{dF(@)kDpaQzR{p˨!`Ih̺(ؐhC|e7eJ[Ӟ|%SgPJ(y}uڟʮU ۏi243)'  35^ր#a%ieq8G"e>[ nE=d.@ 6uBٕ1h60{t:H6 E| E2W˟/<ݞm?kr^`#lV&V񱻐=Żaolyk2crZykb w8W8)T̀1:oL?>ńg}ifƩT(Xx0^MMZ#յhp#g:5\3UFHeRc-}t p`D9t5s5MknG 6Znf&9FM/2&Qg7|iD# pxJp0g^A݃q4?m9ck@f|KQ7d&C~()|AcYQuŝ+m=Ǫ~>)gOpH #1E?F7o/wuSBcd>C$6('f9͢7*&wU,T؀:=o:9#2̼g8cDYy;w}SF"0 U.lÖ-2z&Z>~FOzuEZгWןq=;ɽ]LC_[\Yl 'X,-{)CbMO!Xe̡4.Xm`H>_hjO)J`Ԝ|y9_խ)QnBzHlI !Ԇ *V2HPK ꅢ%̏M4 2SdBr4Z%HGyP30(I,xKMT50eY$1_$a%50JV XN>|N'xWw!f}^#dhEZ1WF3*s=`PөmH0B=Q 4t HIʆBt*)'3D:O<,@f3`^ X¼qJ [{bʁ@W澥D; e~]<9 9GL ṭUi'쥷6@d7W(M"a tc`2:_W_")-6>Dg9kYg+19;z~mĵ|ͿЕD lULЭ+V+9N?aFNĪnjK1ÊF p@XUVJ*%nqw@<]ďrr({Rui.?R~,@`' {}"Pdn@mODc(窵;R<ޗ£T(#=}L5Щ8g>7Ԛ^CFOP-6Zo3JuϘVk9Jm׈f\9|&\ ]bM"t<]*]wy& rF8TTP<ڗ1 cWUy]yk} 33!u&:u%z@X滝jgFZHRH-jdF9EЉ!'xЎa)UҀoFN8HZhZ5@G&QbLBQ2<:5<@}d˔jLĽ>%Al7 b oR)Z+^AiRRQfqKa$+ d zQ2WPeOsͥa ZN#} 4n lW|rcG<Q&8(= b:WZ>>_]B&@\^-ߤNJ>z3vz{S@7︧t\9z;y=A RŀŧG8"== O 1t-/([We=[|"c@DE}QT;vQcv:%v(-NsÖIs.-r< +,@xCLJI' m y O4nT@,ӄjQ><.,%jBi^)E>{ H:P<ꪺ"ߴ0%'kc-RRBct>M˻ߴsAu;6>9g{ie)Ҍ]s&?K_nT~OY̔g&|9w 5 V2 `2dļUvdǯ~.||miB0ϻ^==1TwF o2 +,zI#фck׶މ&LM3y,XhЙm@πPrNŝڌӚ 9K 4 5 JmB u&F S ZPa0SHeޛE_HSFs{P- 1%Ħ/X8"z/e":b@(fρ(a=U9|=$ պIPÅC]nʟ30K7v5b>=t<2,ЎF>|%j^#2>:O]Hm:ln=9 8^[ݴ]#po̟/ۋhB_z.C}LWgKw{zNT`T77=17s Wdϩ4cЁxB*-I2*e#^'{G~}L"3F$G>N_.PVw˯߾>H)O+y 6M}ޒ'o)8U{KAem*NUFEP4Eͭ^9#fM\[zB!0-źPХՃ66mW\1 [)jE kj|=.On8q142E!NP c`@KeE,V (7?c/~7>r@⑼jGgFm'%67IH.~օaL;,}yïs҆2tȃ>k?8{0?_Ǜf#]͚?+{^-}JZ)iS[KP7U_Ϛ .\ N-K}>W}P.2;Wx\3B $ GW"8kYME:*2Rg$Q&7:\ZbImЃj՞j-Cm}:bN[ Zx5DDȣ ZMt5&CH3jF bAhV sjClCO-&*ӝ:eZJHE2nq؝j&(ӊT$Q6Ec`mrc :{] tՊQQGxd0k(gut8Gw1f{PJȜ9 -HڌDR۠bGGd>U k莳ѧK8)d,D*At(p7 ‚F=fGY^Nzڡ` bW2vJH- J]#SZ$Njhh}Mn] eIDېE'5 3:68W-F.5L{MM(RVClw*LLr.{v00ҸK)DkHR[kyKpVI0i5c#a-sKAR&-Tu>]YFy]o(߇y^9PJ%>00u|YC leej E2_.q=M] _pw _ OnxK YBl:gDΎ(HwzEFo<\98O?坅zȲm\/c6J8>ӉvHH&$7lnRMރWz 9jCDGRqvI1E{eLƎٛ>ę{sCr:ƕg~› ^=vjf*r;JJK\vzΟFvD͞\B!3'Hu.n]De˶ΤpZΨh |Rѝ&=wݷ[v&4|4ڦT ;QFG-՞o,Ӏ=HWάF.~X]^ᄍK_=^L*YClN=͓@S—?.Úyv xo۫e~Xֺ&)dڪx lD4|),"3;񏜯m?f&@(b:I>]5 :O~WrXYL=ĕĶw.6%~IFɈ&J|ـbjur\É:z`Z:E}rLȆ#Oi(qbPbA IL ]> Dl!0b7.w|6J&Th,ULQ}Z_C|C 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005303513015134651333017701 0ustar rootroot
Jan 23 09:07:06 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 09:07:06 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 
crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 
crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 09:07:06 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 09:07:07 crc 
restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 09:07:07 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 09:07:07 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 23 09:07:07 crc kubenswrapper[4684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 09:07:07 crc kubenswrapper[4684]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 23 09:07:07 crc kubenswrapper[4684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 09:07:07 crc kubenswrapper[4684]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 09:07:07 crc kubenswrapper[4684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 23 09:07:07 crc kubenswrapper[4684]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.465334 4684 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.467959 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.467979 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.467985 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.467993 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.467998 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468003 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468008 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468012 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468016 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468020 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468023 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468027 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468031 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468035 4684 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468038 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468042 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468045 4684 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468049 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468052 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468056 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468059 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468063 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468066 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468070 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468073 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468076 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468081 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468087 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468091 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468095 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468099 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468105 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468110 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468115 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468120 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468124 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468129 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468134 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468139 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468143 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468148 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468152 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468156 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468160 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468164 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468168 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468172 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468176 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468183 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468188 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468191 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468195 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468199 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468203 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468206 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468211 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468216 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468222 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468227 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468231 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468235 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468238 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468242 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468246 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468250 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468253 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468257 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468260 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468263 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468267 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.468271 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468499 4684 flags.go:64] FLAG: --address="0.0.0.0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468511 4684 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468520 4684 flags.go:64] FLAG: --anonymous-auth="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468526 4684 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468531 4684 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468536 4684 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468541 4684 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468547 4684 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468552 4684 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468557 4684 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468561 4684 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468566 4684 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468571 4684 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468576 4684 flags.go:64] FLAG: --cgroup-root=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468580 4684 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468585 4684 flags.go:64] FLAG: --client-ca-file=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468589 4684 flags.go:64] FLAG: --cloud-config=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468593 4684 flags.go:64] FLAG: --cloud-provider=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468598 4684 flags.go:64] FLAG: --cluster-dns="[]"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468603 4684 flags.go:64] FLAG: --cluster-domain=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468858 4684 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468864 4684 flags.go:64] FLAG: --config-dir=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468869 4684 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468873 4684 flags.go:64] FLAG: --container-log-max-files="5"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468879 4684 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468883 4684 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468888 4684 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468892 4684 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468896 4684 flags.go:64] FLAG: --contention-profiling="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468901 4684 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468905 4684 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468909 4684 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468913 4684 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468918 4684 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468922 4684 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468932 4684 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468936 4684 flags.go:64] FLAG: --enable-load-reader="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468940 4684 flags.go:64] FLAG: --enable-server="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468944 4684 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468950 4684 flags.go:64] FLAG: --event-burst="100"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468955 4684 flags.go:64] FLAG: --event-qps="50"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468959 4684 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468963 4684 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468967 4684 flags.go:64] FLAG: --eviction-hard=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468972 4684 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468976 4684 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468980 4684 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468985 4684 flags.go:64] FLAG: --eviction-soft=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468989 4684 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.468993 4684 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469001 4684 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469006 4684 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469010 4684 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469015 4684 flags.go:64] FLAG: --fail-swap-on="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469019 4684 flags.go:64] FLAG: --feature-gates=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469025 4684 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469030 4684 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469034 4684 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469039 4684 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469043 4684 flags.go:64] FLAG: --healthz-port="10248"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469047 4684 flags.go:64] FLAG: --help="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469051 4684 flags.go:64] FLAG: --hostname-override=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469055 4684 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469060 4684 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469064 4684 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469068 4684 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469072 4684 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469076 4684 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469080 4684 flags.go:64] FLAG: --image-service-endpoint=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469084 4684 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469088 4684 flags.go:64] FLAG: --kube-api-burst="100"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469093 4684 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469097 4684 flags.go:64] FLAG: --kube-api-qps="50"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469101 4684 flags.go:64] FLAG: --kube-reserved=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469105 4684 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469109 4684 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469113 4684 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469117 4684 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469121 4684 flags.go:64] FLAG: --lock-file=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469125 4684 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469129 4684 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469133 4684 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469140 4684 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469145 4684 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469149 4684 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469153 4684 flags.go:64] FLAG: --logging-format="text"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469157 4684 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469162 4684 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469166 4684 flags.go:64] FLAG: --manifest-url=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469170 4684 flags.go:64] FLAG: --manifest-url-header=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469176 4684 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469180 4684 flags.go:64] FLAG: --max-open-files="1000000"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469185 4684 flags.go:64] FLAG: --max-pods="110"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469189 4684 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469193 4684 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469197 4684 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469202 4684 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469228 4684 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469233 4684 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469238 4684 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469250 4684 flags.go:64] FLAG: --node-status-max-images="50"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469255 4684 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469259 4684 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469264 4684 flags.go:64] FLAG: --pod-cidr=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469269 4684 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469277 4684 flags.go:64] FLAG: --pod-manifest-path=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469281 4684 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469286 4684 flags.go:64] FLAG: --pods-per-core="0"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469291 4684 flags.go:64] FLAG: --port="10250"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469295 4684 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469299 4684 flags.go:64] FLAG: --provider-id=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469304 4684 flags.go:64] FLAG: --qos-reserved=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469308 4684 flags.go:64] FLAG: --read-only-port="10255"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469313 4684 flags.go:64] FLAG: --register-node="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469319 4684 flags.go:64] FLAG: --register-schedulable="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469324 4684 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469332 4684 flags.go:64] FLAG: --registry-burst="10"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469337 4684 flags.go:64] FLAG: --registry-qps="5"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469341 4684 flags.go:64] FLAG: --reserved-cpus=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469345 4684 flags.go:64] FLAG: --reserved-memory=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469351 4684 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469356 4684 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469360 4684 flags.go:64] FLAG: --rotate-certificates="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469365 4684 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469369 4684 flags.go:64] FLAG: --runonce="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469374 4684 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469378 4684 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469382 4684 flags.go:64] FLAG: --seccomp-default="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469387 4684 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469391 4684 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469396 4684 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469400 4684 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469405 4684 flags.go:64] FLAG: --storage-driver-password="root"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469410 4684 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469414 4684 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469418 4684 flags.go:64] FLAG: --storage-driver-user="root"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469423 4684 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469427 4684 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469432 4684 flags.go:64] FLAG: --system-cgroups=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469436 4684 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469443 4684 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469447 4684 flags.go:64] FLAG: --tls-cert-file=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469452 4684 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469459 4684 flags.go:64] FLAG: --tls-min-version=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469465 4684 flags.go:64] FLAG: --tls-private-key-file=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469470 4684 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469475 4684 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469480 4684 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469484 4684 flags.go:64] FLAG: --v="2"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469491 4684 flags.go:64] FLAG: --version="false"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469497 4684 flags.go:64] FLAG: --vmodule=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469502 4684 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.469506 4684 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469608 4684 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469614 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469618 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469622 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469625 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469629 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469633 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469636 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469640 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469643 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469647 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469650 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469654 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469658 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469663 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469667 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469671 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469674 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469679 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469682 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469686 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469689 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469708 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469712 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469716 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469720 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469727 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
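The FLAG lines above are the kubelet echoing back every parsed command-line flag, and the flags.go:64 call site suggests a simple walk over the parsed flag set. A minimal Go sketch of that pattern, using the spf13/pflag library the kubelet builds on; the flag names and values here are illustrative, not the kubelet's own registration code:

package main

import (
	"log"

	flag "github.com/spf13/pflag"
)

func main() {
	fs := flag.NewFlagSet("kubelet-demo", flag.ContinueOnError)
	fs.String("pod-manifest-path", "", "path to static pod manifests")
	fs.Int32("pod-max-pids", -1, "maximum PIDs per pod")
	if err := fs.Parse([]string{"--pod-max-pids=4096"}); err != nil {
		log.Fatal(err)
	}
	// Visit every registered flag and print it in the same
	// FLAG: --name="value" shape seen in the log above.
	fs.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
}

Because VisitAll walks the full registry, defaults are logged alongside explicitly set flags, which is why the dump above includes flags that were never passed.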
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469732 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469735 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469739 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469742 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469746 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469749 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469753 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469756 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469760 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469764 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469767 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469771 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469774 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469777 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469781 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469784 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469788 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469791 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469795 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469798 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469802 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469805 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469809 4684 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469812 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469816 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469819 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469823 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469826 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469829 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469834 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469838 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469843 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469847 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469851 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469855 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469860 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469863 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469868 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469872 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469876 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469879 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469883 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469887 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.469890 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.470021 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.478570 4684 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.478607 4684 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478675 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478689 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478711 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478717 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478721 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478725 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478729 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478733 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478737 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478741 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478745 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478748 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478752 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478756 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478759 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478763 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478768 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478774 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478779 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478782 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478787 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
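The long runs of feature_gate.go:330 warnings above come from OpenShift passing its full platform gate list to a component that only knows the upstream Kubernetes gates: known names are applied, unknown names are logged and skipped, and the effective map is then printed. A minimal Go sketch of that pattern; the gate names, defaults, and function names here are illustrative, not the actual kubelet gate table:

package main

import "log"

// applyGates merges requested gate settings over known defaults,
// warning on (and skipping) any name it does not recognize.
func applyGates(requested, known map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(known))
	for name, def := range known {
		effective[name] = def
	}
	for name, enabled := range requested {
		if _, ok := known[name]; !ok {
			log.Printf("unrecognized feature gate: %s", name)
			continue
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	known := map[string]bool{"KMSv1": false, "NodeSwap": false}
	requested := map[string]bool{"KMSv1": true, "GatewayAPI": true} // GatewayAPI is unknown here
	log.Printf("feature gates: %v", applyGates(requested, known))
}

This also explains why the warnings repeat below: the gate list appears to be parsed more than once during startup, and each pass re-logs the same unknown names with fresh timestamps.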
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478794 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478798 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478802 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478806 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478810 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478813 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478817 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478822 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478826 4684 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478831 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478839 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478844 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478849 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478854 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478859 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478864 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478869 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478874 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478878 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478882 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478886 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478889 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478893 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478897 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478900 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478904 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478908 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478911 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478915 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478919 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478923 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478926 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478930 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478934 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478937 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478941 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478944 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478948 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478951 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478955 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478958 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478961 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478965 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478970 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478973 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478977 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478980 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478984 4684 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478987 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.478991 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.478996 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479117 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479123 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479127 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479131 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479135 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479138 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479142 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479145 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479149 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479153 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479157 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479162 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479166 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479169 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479173 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479177 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479180 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479184 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479187 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479191 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479194 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479198 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479202 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479206 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479209 4684 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479213 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479217 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479222 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479227 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479231 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479235 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479239 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479243 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479246 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479250 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479254 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479257 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479261 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479265 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479268 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479272 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479276 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479280 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479285 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479289 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479293 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479298 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479303 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479306 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479310 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479314 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479318 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479321 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479325 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479329 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479333 4684 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479337 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479340 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479344 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479347 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479351 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479354 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479357 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479361 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479364 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479368 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479371 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479375 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479378 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479381 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.479385 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.479390 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.479680 4684 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.484023 4684 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.484126 4684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.484611 4684 server.go:997] "Starting client certificate rotation"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.484636 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.484958 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 10:04:06.905315948 +0000 UTC
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.485036 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.489515 4684 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.490446 4684 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.490947 4684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.496718 4684 log.go:25] "Validated CRI v1 runtime API"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.507672 4684 log.go:25] "Validated CRI v1 image API"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.508663 4684 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.510337 4684 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-09-01-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.510362 4684 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.522557 4684 manager.go:217] Machine: {Timestamp:2026-01-23 09:07:07.521767142 +0000 UTC m=+0.145145703 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:63162577-fb09-4289-a5f3-3b12988dcfbf BootID:bcfe8adf-9d26-48e3-b456-e1c8d79ddfed Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:17:45:2f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:17:45:2f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b3:76:1b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7f:06:de Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:44:3a:29 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2d:0f:53 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b4:11:4c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:69:83:41:cf:fe Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:f6:77:f5:53:6c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.522762 4684 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.522889 4684 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523297 4684 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523468 4684 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523498 4684 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523659 4684 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523669 4684 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523818 4684 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.523839 4684 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524083 4684 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524160 4684 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524621 4684 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524638 4684 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524657 4684 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524669 4684 kubelet.go:324] "Adding apiserver pod source"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.524678 4684 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.526131 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.526191 4684 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.526200 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.526196 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.526297 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.526605 4684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
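The certificate_store.go and certificate_manager.go lines above report which PEM pair the kubelet loaded and when it expires. A minimal Go sketch for reading that expiry yourself, using only the standard library; it assumes the certificate is the first PEM block in the file, which may not hold for every PEM layout:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log line above; adjust for the serving cert.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// Assumption: the leaf certificate is the first PEM block.
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no certificate block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires:", cert.NotAfter)
}

The rotation deadline the log prints falls well before NotAfter, consistent with the manager renewing at a jittered fraction of the certificate's lifetime rather than at the last moment.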
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527300 4684 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527899 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527921 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527928 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527934 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527948 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527954 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527962 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527972 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527980 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527986 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.527996 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.528002 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.528388 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.529011 4684 server.go:1280] "Started kubelet"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.529232 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.529331 4684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.529447 4684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.530397 4684 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 09:07:07 crc systemd[1]: Started Kubernetes Kubelet.
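Every failure above and below this point carries the same root cause: "dial tcp 38.129.56.16:6443: connect: connection refused", meaning nothing was accepting connections on the API server port yet when the kubelet started. A small standalone Go probe like the following, which is a diagnostic sketch rather than anything the kubelet runs, distinguishes "connection refused" (host reachable, port closed) from a timeout (host unreachable):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the errors in this log; substitute any host:port.
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
	if err != nil {
		// e.g. "connect: connection refused" vs "i/o timeout"
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port is accepting connections")
}

On a CRC node this is the expected startup order: the kubelet must come up first so it can launch the static kube-apiserver pod, so these early refused dials are transient and the reflectors retry.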
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.531461 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.531185 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.16:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d50f1abf822ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 09:07:07.528979181 +0000 UTC m=+0.152357722,LastTimestamp:2026-01-23 09:07:07.528979181 +0000 UTC m=+0.152357722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.531497 4684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.531513 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:43:27.033527244 +0000 UTC
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.532660 4684 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.532679 4684 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.532917 4684 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.535022 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.535083 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.535482 4684 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.535545 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="200ms"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.535567 4684 factory.go:55] Registering systemd factory
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.535586 4684 factory.go:221] Registration of the systemd container factory successfully
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.538853 4684 factory.go:153] Registering CRI-O factory
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.538916 4684 factory.go:221] Registration of the crio container factory successfully
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.539644 4684 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.539682 4684 factory.go:103] Registering Raw factory
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.539714 4684 manager.go:1196] Started watching for new ooms in manager
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.540783 4684 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.540944 4684 manager.go:319] Starting recovery of all containers
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549673 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549755 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549771 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549785 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549800 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549814 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549827 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549843 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549860 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549871 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549882 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549894 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549907 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549921 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549936 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549947 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549959 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549972 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.549984 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550017 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550029 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550040 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550096 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550109 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550120 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550133 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550150 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550163 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550177 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550190 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550201 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550214 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550228 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550241 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550255 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550270 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550300 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550314 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550332 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550345 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550360 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550373 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d"
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550385 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550398 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550410 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550441 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550454 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550469 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550482 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550495 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550510 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550525 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550578 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550610 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550626 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550638 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550651 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550664 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550677 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550690 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550729 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550741 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550755 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550766 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550780 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550793 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550808 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550822 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550835 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550850 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550866 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550879 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550892 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550906 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550919 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550932 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550944 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550957 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550971 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550983 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.550996 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551007 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551020 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551031 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551043 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551056 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551071 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551084 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551109 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551121 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551134 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551148 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551159 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551171 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551184 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551197 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551209 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551221 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551233 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551245 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551259 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551272 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551287 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551300 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551318 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551330 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551341 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551353 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551366 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551378 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551392 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551403 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551415 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551427 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551440 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551453 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551465 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551477 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551490 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551500 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551512 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551524 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551544 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551556 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551568 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551582 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551593 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551606 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551619 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551633 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551645 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551657 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551669 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551681 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551715 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551731 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551743 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551755 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551774 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551787 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551798 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551810 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551821 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551832 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551845 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551857 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551870 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551884 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551899 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551912 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551926 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551942 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551954 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551965 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551977 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.551988 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552000 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552011 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552022 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552032 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552042 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552052 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552063 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552074 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552086 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552097 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552116 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552129 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552142 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552156 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552171 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552183 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552196 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552210 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552222 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552233 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552245 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552256 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552268 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552281 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552294 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552306 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552321 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552335 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552350 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552366 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552382 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552395 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552409 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552421 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552433 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.552444 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553171 4684 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553197 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553214 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553229 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553241 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553253 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553266 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553277 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553290 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553303 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553314 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553330 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553341 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553352 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553363 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553374 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553384 4684 reconstruct.go:97] "Volume reconstruction finished" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.553391 4684 reconciler.go:26] "Reconciler: start to sync state" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.571029 4684 manager.go:324] Recovery completed Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.578693 4684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.580508 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.580690 4684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.580750 4684 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.580775 4684 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.581397 4684 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 09:07:07 crc kubenswrapper[4684]: W0123 09:07:07.582220 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.582286 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.583111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.583155 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.583167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.584057 4684 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.584075 4684 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.584092 4684 state_mem.go:36] "Initialized new in-memory state store" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.593069 4684 policy_none.go:49] "None policy: Start" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.593609 4684 
memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.593637 4684 state_mem.go:35] "Initializing new in-memory state store" Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.636063 4684 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.638640 4684 manager.go:334] "Starting Device Plugin manager" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.638682 4684 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.638696 4684 server.go:79] "Starting device plugin registration server" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.639070 4684 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.639085 4684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.639238 4684 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.639352 4684 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.639361 4684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.645286 4684 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.682080 4684 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.682235 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.683410 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.683435 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.683442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.683537 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.683682 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.683730 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684362 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684389 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684394 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684531 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684615 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.684649 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685367 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685386 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685801 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.685877 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686029 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686051 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686417 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686433 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686512 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686571 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686613 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.686720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687179 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687188 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687319 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687379 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.687425 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.688078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.688099 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.688110 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.736837 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="400ms" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.740083 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.741780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.741817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.741829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.741853 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.742437 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.16:6443: connect: connection refused" node="crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755074 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755125 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755164 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755195 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755227 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755254 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755281 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755310 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755341 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755369 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755397 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755425 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755454 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755483 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.755511 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857754 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857817 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857841 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857864 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857890 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857912 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857896 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857972 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") 
pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858004 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858017 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857942 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858051 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858060 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858081 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858103 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858123 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858144 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858148 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858165 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858188 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858191 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858193 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858215 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858057 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.857987 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858208 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858294 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 
09:07:07.858298 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858258 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.858321 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.943322 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.945084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.945180 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.945199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:07 crc kubenswrapper[4684]: I0123 09:07:07.945240 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 09:07:07 crc kubenswrapper[4684]: E0123 09:07:07.946046 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.16:6443: connect: connection refused" node="crc" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.013625 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.017554 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.039976 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b526e70fbfa2fd04b09a1e505a65d7b92d6f67378425f1aa67c5a7db98dc424b WatchSource:0}: Error finding container b526e70fbfa2fd04b09a1e505a65d7b92d6f67378425f1aa67c5a7db98dc424b: Status 404 returned error can't find the container with id b526e70fbfa2fd04b09a1e505a65d7b92d6f67378425f1aa67c5a7db98dc424b Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.042003 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d2a0c7a1717444bdde76908abe8dad18603038d674cf50aad0ff40b71c66c549 WatchSource:0}: Error finding container d2a0c7a1717444bdde76908abe8dad18603038d674cf50aad0ff40b71c66c549: Status 404 returned error can't find the container with id d2a0c7a1717444bdde76908abe8dad18603038d674cf50aad0ff40b71c66c549 Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.044253 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.062680 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4fed10f66fec06e58f2b8d938998e7904b717c4416dc4425e307468d70889fee WatchSource:0}: Error finding container 4fed10f66fec06e58f2b8d938998e7904b717c4416dc4425e307468d70889fee: Status 404 returned error can't find the container with id 4fed10f66fec06e58f2b8d938998e7904b717c4416dc4425e307468d70889fee Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.069507 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.076391 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.085672 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-87ef036e5c26decc5a62cb0b2a6dc9cb37b751ddd3ae443f31cba972c3f125e0 WatchSource:0}: Error finding container 87ef036e5c26decc5a62cb0b2a6dc9cb37b751ddd3ae443f31cba972c3f125e0: Status 404 returned error can't find the container with id 87ef036e5c26decc5a62cb0b2a6dc9cb37b751ddd3ae443f31cba972c3f125e0 Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.086213 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dc46acfbad1bab933b6e18e49ba313cecda35ff2cbccdf41932d9baf94fc2ec6 WatchSource:0}: Error finding container dc46acfbad1bab933b6e18e49ba313cecda35ff2cbccdf41932d9baf94fc2ec6: Status 404 returned error can't find the container with id dc46acfbad1bab933b6e18e49ba313cecda35ff2cbccdf41932d9baf94fc2ec6 Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.138563 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="800ms" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.347177 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.348451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.348494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.348505 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.348529 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.348978 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.16:6443: connect: connection refused" node="crc" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.388715 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.388803 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.442187 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.129.56.16:6443: connect: connection refused Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.442791 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.484187 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.484334 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError" Jan 23 09:07:08 crc kubenswrapper[4684]: W0123 09:07:08.514150 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.514275 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.530599 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.532526 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:38:28.482174159 +0000 UTC Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.588806 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732" exitCode=0 Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.588885 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732"} Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.589123 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b526e70fbfa2fd04b09a1e505a65d7b92d6f67378425f1aa67c5a7db98dc424b"} Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.589323 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 
09:07:08.590840 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.590882 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.590895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.591795 4684 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="45ae7eed4bb2ab5de80485e1ff7e8d16cb1718f1a4676791d35e22ba2e11887f" exitCode=0
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.591867 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"45ae7eed4bb2ab5de80485e1ff7e8d16cb1718f1a4676791d35e22ba2e11887f"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.591939 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dc46acfbad1bab933b6e18e49ba313cecda35ff2cbccdf41932d9baf94fc2ec6"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.592085 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.593128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.593171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.593183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.593766 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.593989 4684 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c" exitCode=0
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.594094 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.594145 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"87ef036e5c26decc5a62cb0b2a6dc9cb37b751ddd3ae443f31cba972c3f125e0"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.594245 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.595049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.595110 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.595126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.595639 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.595680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.595690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.598656 4684 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba" exitCode=0
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.598768 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.598808 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fed10f66fec06e58f2b8d938998e7904b717c4416dc4425e307468d70889fee"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.598927 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.600033 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.600077 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.600088 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.602844 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc"}
Jan 23 09:07:08 crc kubenswrapper[4684]: I0123 09:07:08.602916 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d2a0c7a1717444bdde76908abe8dad18603038d674cf50aad0ff40b71c66c549"}
Jan 23 09:07:08 crc kubenswrapper[4684]: E0123 09:07:08.939685 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="1.6s"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.149613 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.151003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.151073 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.151087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.151115 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 09:07:09 crc kubenswrapper[4684]: E0123 09:07:09.151728 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.16:6443: connect: connection refused" node="crc"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.530058 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 09:07:09 crc kubenswrapper[4684]: E0123 09:07:09.531200 4684 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.531204 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.533736 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:02:04.011291469 +0000 UTC
Jan 23 09:07:09 crc kubenswrapper[4684]: I0123 09:07:09.611169 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e"}
Jan 23 09:07:10 crc kubenswrapper[4684]: W0123 09:07:10.210588 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:10 crc kubenswrapper[4684]: E0123 09:07:10.210645 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:10 crc kubenswrapper[4684]: W0123 09:07:10.232430 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.16:6443: connect: connection refused
Jan 23 09:07:10 crc kubenswrapper[4684]: E0123 09:07:10.232499 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160:
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.16:6443: connect: connection refused" logger="UnhandledError"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.534829 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:33:15.024933291 +0000 UTC
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.615273 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.615398 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.616860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.616897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.616905 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.619036 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.619080 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.619093 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.619170 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.620046 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.620076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.620087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.622657 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.622698 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.622866 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.623779 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.623802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.623812 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.626991 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.627028 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.627042 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.627053 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.628662 4684 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53f54d40a8f9c299945a266795145acc4152527be204001cf6f5138e18677cf2" exitCode=0
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.628729 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53f54d40a8f9c299945a266795145acc4152527be204001cf6f5138e18677cf2"}
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.628863 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.629514 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.629543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.629554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.753156 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.754227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.754260 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.754272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:10 crc kubenswrapper[4684]: I0123 09:07:10.754296 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.535632 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:36:30.189893848 +0000 UTC
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.634077 4684 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7cba30805de8549c0bfdfc368975cd0a27e2bb30c4fe11c71893553d0d40cf11" exitCode=0
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.634160 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7cba30805de8549c0bfdfc368975cd0a27e2bb30c4fe11c71893553d0d40cf11"}
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.634401 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.635763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.635796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.635807 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.638918 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.638967 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109"}
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.639076 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.639889 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.639913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.639922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.640875 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.640947 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:11 crc kubenswrapper[4684]: I0123 09:07:11.640973 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.175473 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.235348 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.535828 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:13:50.89039755 +0000 UTC Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.644237 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.644763 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2664091f65dbfd77978229e9292847f4a3fe4756bbf69fc259f06f1711d73d58"} Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.645018 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.645053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.645072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.645079 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.645226 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.646244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.646261 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:12 crc kubenswrapper[4684]: I0123 09:07:12.646268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.433680 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.433850 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.434823 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.434850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 
09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.434858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.536067 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:15:35.00954867 +0000 UTC Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.650020 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d86979ee904288e33f62934899ea6c379f7f7e2040315ab52e73fe7cd778e398"} Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.650067 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0678a5662f340b2e46b0a6c96936b9a31bd3f5180859a6684dddb18a0400c2d6"} Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.650081 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.650095 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.650121 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.650979 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.651007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.651015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.651065 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.651079 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.651087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:13 crc kubenswrapper[4684]: I0123 09:07:13.841955 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.040657 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.150831 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.536876 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:10:20.953004316 +0000 UTC Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.629677 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.657190 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e45c7932eec6204825532d87e85ae78a530c115e3b4dc02e4d890c2aa5bed860"} Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.657237 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ffa3502d96f1f4b8b64211969013c6d85864cdc9aa14572b6d797fb56a5f9cb"} Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.657258 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.657355 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.657373 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658712 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658664 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.658822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:14 crc kubenswrapper[4684]: I0123 09:07:14.874428 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.538013 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:32:03.893193965 +0000 UTC Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.660031 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.660046 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.661552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.661593 4684 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.661602 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.661608 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.661633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:15 crc kubenswrapper[4684]: I0123 09:07:15.661643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:16 crc kubenswrapper[4684]: I0123 09:07:16.538646 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:24:58.348762534 +0000 UTC Jan 23 09:07:16 crc kubenswrapper[4684]: I0123 09:07:16.662369 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:16 crc kubenswrapper[4684]: I0123 09:07:16.663767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:16 crc kubenswrapper[4684]: I0123 09:07:16.663832 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:16 crc kubenswrapper[4684]: I0123 09:07:16.663850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:17 crc kubenswrapper[4684]: I0123 09:07:17.539577 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:46:16.358627546 +0000 UTC Jan 23 09:07:17 crc kubenswrapper[4684]: E0123 09:07:17.645396 4684 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.540442 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:48:44.939968553 +0000 UTC Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.669227 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.669442 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.670912 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.670968 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.670982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:18 crc kubenswrapper[4684]: I0123 09:07:18.674225 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:19 crc kubenswrapper[4684]: I0123 
09:07:19.540920 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:29:20.850734237 +0000 UTC Jan 23 09:07:19 crc kubenswrapper[4684]: I0123 09:07:19.670149 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:19 crc kubenswrapper[4684]: I0123 09:07:19.671281 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:19 crc kubenswrapper[4684]: I0123 09:07:19.671359 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:19 crc kubenswrapper[4684]: I0123 09:07:19.671382 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.445959 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.531257 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 09:07:20 crc kubenswrapper[4684]: E0123 09:07:20.540792 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.541833 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:22:24.211155507 +0000 UTC Jan 23 09:07:20 crc kubenswrapper[4684]: W0123 09:07:20.589537 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.589627 4684 trace.go:236] Trace[964790670]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 09:07:10.588) (total time: 10001ms): Jan 23 09:07:20 crc kubenswrapper[4684]: Trace[964790670]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:07:20.589) Jan 23 09:07:20 crc kubenswrapper[4684]: Trace[964790670]: [10.001356526s] [10.001356526s] END Jan 23 09:07:20 crc kubenswrapper[4684]: E0123 09:07:20.589649 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.672686 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.673585 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.673620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:20 crc kubenswrapper[4684]: I0123 09:07:20.673629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:20 crc kubenswrapper[4684]: E0123 09:07:20.755382 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 23 09:07:21 crc kubenswrapper[4684]: E0123 09:07:21.169100 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188d50f1abf822ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 09:07:07.528979181 +0000 UTC m=+0.152357722,LastTimestamp:2026-01-23 09:07:07.528979181 +0000 UTC m=+0.152357722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.205565 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.205672 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 09:07:21 crc kubenswrapper[4684]: W0123 09:07:21.366130 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.366229 4684 trace.go:236] Trace[805878543]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 09:07:11.364) (total time: 10001ms): Jan 23 09:07:21 crc kubenswrapper[4684]: Trace[805878543]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:07:21.366) Jan 23 09:07:21 crc kubenswrapper[4684]: Trace[805878543]: [10.00189554s] [10.00189554s] END Jan 23 09:07:21 crc kubenswrapper[4684]: E0123 09:07:21.366253 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.542131 4684 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:22:41.885759641 +0000 UTC Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.670160 4684 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.670252 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.812252 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.812318 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.821285 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 09:07:21 crc kubenswrapper[4684]: I0123 09:07:21.821342 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 09:07:22 crc kubenswrapper[4684]: I0123 09:07:22.543125 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:56:24.168885197 +0000 UTC Jan 23 09:07:23 crc kubenswrapper[4684]: I0123 09:07:23.544201 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:07:27.371281222 +0000 UTC Jan 23 09:07:23 crc kubenswrapper[4684]: I0123 09:07:23.955826 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:23 crc kubenswrapper[4684]: I0123 09:07:23.957541 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:23 crc kubenswrapper[4684]: I0123 09:07:23.957591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:23 crc kubenswrapper[4684]: I0123 09:07:23.957606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:23 crc kubenswrapper[4684]: I0123 09:07:23.957638 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 09:07:23 crc kubenswrapper[4684]: E0123 09:07:23.962372 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.157114 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.157280 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.159088 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.159116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.159127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.161938 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.338018 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.338211 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.339362 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.339399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.339411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.361208 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.545281 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:02:20.729879526 +0000 UTC Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.681290 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.681290 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.682415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.682446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.682456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.682485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.682501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.682510 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:24 crc kubenswrapper[4684]: I0123 09:07:24.695194 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 09:07:25 crc kubenswrapper[4684]: I0123 09:07:25.545875 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:43:12.768356534 +0000 UTC Jan 23 09:07:25 crc kubenswrapper[4684]: I0123 09:07:25.684127 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:25 crc kubenswrapper[4684]: I0123 09:07:25.685118 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:25 crc kubenswrapper[4684]: I0123 09:07:25.685156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:25 crc kubenswrapper[4684]: I0123 09:07:25.685168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.546936 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:45:28.957363941 +0000 UTC Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.583519 4684 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.819082 4684 trace.go:236] Trace[359144961]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 09:07:16.028) (total time: 10790ms): Jan 23 09:07:26 crc kubenswrapper[4684]: Trace[359144961]: ---"Objects listed" error: 10790ms (09:07:26.818) Jan 23 09:07:26 crc kubenswrapper[4684]: Trace[359144961]: [10.790157395s] [10.790157395s] END Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.819130 4684 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.820023 4684 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.821550 4684 trace.go:236] Trace[1273473738]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 09:07:13.726) (total time: 13094ms): Jan 23 09:07:26 crc kubenswrapper[4684]: Trace[1273473738]: ---"Objects listed" error: 13094ms (09:07:26.821) Jan 23 09:07:26 crc kubenswrapper[4684]: Trace[1273473738]: [13.094901439s] [13.094901439s] END Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.821567 4684 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 
09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.827344 4684 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.868028 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.868096 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.868596 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 09:07:26 crc kubenswrapper[4684]: I0123 09:07:26.868712 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.216466 4684 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.364596 4684 csr.go:261] certificate signing request csr-z4zmg is approved, waiting to be issued Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.399889 4684 csr.go:257] certificate signing request csr-z4zmg is issued Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.485566 4684 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.485765 4684 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.485794 4684 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.485804 4684 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.485879 4684 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.535125 4684 apiserver.go:52] "Watching apiserver" Jan 23 09:07:27 
crc kubenswrapper[4684]: I0123 09:07:27.542145 4684 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.542394 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.542653 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.542871 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.542927 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.543013 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.543053 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.543095 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.543117 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.543152 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.543455 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.547370 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:41:02.893312322 +0000 UTC Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.547878 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548267 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548510 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548771 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548824 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548924 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548946 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.548991 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.552855 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.578051 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.595504 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.616320 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.633461 4684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.634740 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.670389 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.681083 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.691233 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.692819 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.694646 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109" exitCode=255 Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.694713 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109"} Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.717321 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.722130 4684 scope.go:117] "RemoveContainer" containerID="42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.722252 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.724777 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.724839 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.724872 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.724921 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.724944 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.724995 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725021 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725062 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725089 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725148 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725173 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725201 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725247 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725268 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725289 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725309 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725328 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725371 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725396 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725415 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725435 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725456 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725480 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725503 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725523 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725551 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725605 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725628 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725650 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725647 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725675 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725719 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725746 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725770 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725791 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725812 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725833 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725859 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725879 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725900 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725922 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725941 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725960 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.725982 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726027 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726042 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: 
"22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726050 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726072 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726088 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726100 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726137 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726158 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726176 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726191 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726226 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726230 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726259 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726270 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726290 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726305 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726322 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726338 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726356 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726373 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " 
Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726389 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726405 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726422 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726440 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726458 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726496 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726514 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726530 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726548 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726564 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726581 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726597 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726612 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726629 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726644 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726660 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726679 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726719 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726740 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726759 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726775 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726790 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726805 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726821 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726840 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726862 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726879 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726895 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726910 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726934 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726949 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726964 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726984 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727007 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727028 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727049 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727070 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727092 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727113 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727133 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727152 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727174 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727196 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727217 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727238 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727259 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727280 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727301 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727330 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727351 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727373 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727398 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727423 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727446 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727466 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727488 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727510 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727526 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727542 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727559 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727575 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727591 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727606 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727621 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727637 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727652 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727668 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727683 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727715 4684 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727732 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727748 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727764 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727780 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727795 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727811 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727826 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727849 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727871 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 09:07:27 crc 
kubenswrapper[4684]: I0123 09:07:27.727893 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727912 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727928 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727944 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727959 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727976 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727995 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728050 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728075 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728101 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" 
(UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728122 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728141 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728160 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728178 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728195 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728211 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728228 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728244 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728259 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728274 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728292 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728307 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728327 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728346 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728367 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728387 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728407 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728431 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728475 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728498 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728515 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728532 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728552 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728579 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728601 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728618 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728633 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728651 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728669 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728686 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728733 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728749 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728766 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728784 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728800 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728817 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728836 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728860 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728877 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728892 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728909 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728925 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728941 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728959 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728975 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728993 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729010 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729028 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729045 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729081 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729102 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729122 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729158 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729192 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729229 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729246 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729262 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729279 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729297 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729316 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729331 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729372 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729383 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729393 4684 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729403 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729413 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729423 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729434 4684 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.735560 4684 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726353 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726398 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726453 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726535 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726550 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726634 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726664 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.726769 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727018 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727140 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727190 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727334 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727366 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727366 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727503 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727591 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727834 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727851 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727956 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.727963 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728005 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728019 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728148 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728195 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728263 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728359 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728415 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728495 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.735979 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728524 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728667 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.728673 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.729640 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730134 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730153 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730374 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730660 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730661 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730735 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.730908 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731056 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.731122 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:07:28.23110498 +0000 UTC m=+20.854483521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731143 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731271 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731230 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731437 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731566 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731625 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731665 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731785 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.731892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.732591 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.733914 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.733982 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.734288 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.734648 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.735065 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.735366 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.735606 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.735657 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.736166 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.736194 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.736216 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.736333 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.740763 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.745225 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.745635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.745853 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.746018 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.753909 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.755065 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.755099 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.761842 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.762379 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.762792 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.763231 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.770914 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.771215 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.771363 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.771818 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.771954 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.773138 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.773742 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.773743 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.773894 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.774318 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.774379 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.774402 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.778048 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.778192 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.779946 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.789360 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.794092 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.794167 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.795260 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.798210 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.803933 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.804517 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.804906 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.805189 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.806008 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.806197 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.806981 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.807782 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.809076 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.809616 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.810846 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.811065 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.811208 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.811268 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.811468 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.811631 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.812110 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.812360 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.812556 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.812615 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.812737 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.812892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.813079 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.813193 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.813358 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.813405 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.813750 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.814644 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.815356 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.816903 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.817397 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.817681 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.817956 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.817966 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.818034 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.818141 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.818317 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.818337 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.818493 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.819846 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.820081 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.820635 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.821900 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.823413 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.823844 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.824105 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.824417 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.824988 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.825237 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.825461 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.825711 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.825935 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.826129 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.826322 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.826584 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.826727 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.830155 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.830599 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.830872 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.831196 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.831322 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.832624 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.832659 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.832722 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.832692 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.832910 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.832923 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.832933 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.833264 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.833322 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.834285 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.834926 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.835109 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.835465 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.835617 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.835795 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.835883 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.835969 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.836279 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.836358 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.839092 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.839480 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.839552 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.839971 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.840039 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.840241 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.840406 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:28.340367796 +0000 UTC m=+20.963746337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.840539 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.840553 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.840570 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.840945 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.841024 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:28.341011495 +0000 UTC m=+20.964390186 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.841058 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:28.341049256 +0000 UTC m=+20.964428027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841098 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841202 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841254 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 09:07:27 crc kubenswrapper[4684]: E0123 09:07:27.841351 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:28.341330924 +0000 UTC m=+20.964709465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.841418 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes/kubernetes.io~secret/cert Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841438 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.841534 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-system-session Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841551 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841534 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841632 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.841748 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~projected/kube-api-access-6g6sz Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841808 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841894 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841924 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841947 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842010 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842039 4684 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842059 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842275 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842315 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842337 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842379 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842401 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842427 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842450 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842518 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842764 4684 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842786 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842803 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842816 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842830 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842843 4684 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842853 4684 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842864 4684 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842874 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842885 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842896 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842910 4684 reconciler_common.go:293] "Volume detached for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842921 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842934 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842944 4684 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842956 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842967 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842977 4684 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842987 4684 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842998 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843009 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843019 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843029 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843040 4684 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843050 4684 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843061 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843075 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843087 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843097 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843107 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843117 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843127 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843139 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843152 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843163 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843174 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843184 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843195 4684 reconciler_common.go:293] "Volume detached for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843206 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843224 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843235 4684 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843243 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843255 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843267 4684 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.841809 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843284 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.842223 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes/kubernetes.io~projected/kube-api-access-jhbk2 Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843311 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.842835 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843052 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843118 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes/kubernetes.io~projected/kube-api-access-x7zkh Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843353 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843161 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~secret/serving-cert Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843371 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843222 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~secret/serving-cert Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843386 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843260 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843404 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843396 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes/kubernetes.io~projected/kube-api-access-vt5rc Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843438 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843463 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843427 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843517 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843530 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843532 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~secret/serving-cert Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843558 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843591 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~projected/kube-api-access-d4lsv Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843602 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843644 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes/kubernetes.io~secret/metrics-certs Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843654 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843660 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843670 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843728 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843735 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~secret/v4-0-config-user-idp-0-file-data Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843745 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843773 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843798 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843791 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843809 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843856 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: W0123 09:07:27.843873 4684 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843885 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843297 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843914 4684 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843925 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843936 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843952 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843962 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843976 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843988 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843999 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844012 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844022 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844031 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844042 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc 
kubenswrapper[4684]: I0123 09:07:27.844058 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844075 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844086 4684 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844102 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844112 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844124 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844134 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844144 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844154 4684 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844165 4684 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844174 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844184 4684 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844194 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844204 4684 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844214 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844225 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844234 4684 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844248 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844258 4684 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844270 4684 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844280 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844291 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844300 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844312 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844322 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844331 4684 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc 
kubenswrapper[4684]: I0123 09:07:27.844341 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844361 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844375 4684 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844386 4684 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844398 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844411 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844428 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844444 4684 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844440 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844455 4684 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844467 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844484 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844493 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" 
(UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.843339 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844593 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844861 4684 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844893 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844916 4684 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844933 4684 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844944 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844956 4684 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844969 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844982 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844997 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845009 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845020 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845518 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845539 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845552 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845580 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845594 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845608 4684 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845621 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845635 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845647 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845659 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845670 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845683 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845694 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845720 4684 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845732 4684 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845743 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845755 4684 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845767 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845778 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845788 4684 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845800 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845813 4684 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845824 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845835 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845847 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845859 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845870 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845882 4684 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845895 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845907 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845919 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845931 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845945 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845958 4684 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845970 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845981 4684 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.845993 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846008 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846021 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846033 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846046 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846057 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846069 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846081 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846092 4684 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846103 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846116 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846128 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846140 4684 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846152 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846165 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846176 4684 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846188 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846201 4684 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846212 4684 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846223 4684 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846234 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846245 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846257 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.844648 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.846576 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.849378 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.850361 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.851173 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.854022 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.855341 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.855500 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.856774 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.856811 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.862822 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.863714 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.864618 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.864827 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.873677 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.873898 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.880678 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.902917 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.905015 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.931781 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.947579 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.947873 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.947951 4684 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948023 4684 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948095 4684 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948170 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948225 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948323 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948400 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948469 4684 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948537 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948636 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948692 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948772 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948825 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948879 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.948955 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.949053 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.949122 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.949187 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.954817 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:27 crc kubenswrapper[4684]: I0123 09:07:27.969976 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.156487 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.168618 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-68aec69fa95602e37684e2d8b7363e3cd2aa4eb9de2eb33447c8e5222a2bb40f WatchSource:0}: Error finding container 68aec69fa95602e37684e2d8b7363e3cd2aa4eb9de2eb33447c8e5222a2bb40f: Status 404 returned error can't find the container with id 68aec69fa95602e37684e2d8b7363e3cd2aa4eb9de2eb33447c8e5222a2bb40f Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.251812 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.251969 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:07:29.251952789 +0000 UTC m=+21.875331330 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.256852 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wtphf"] Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.257302 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nk7v5"] Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.257576 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.258421 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.262733 4684 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.262789 4684 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.262749 4684 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.263191 4684 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.263667 4684 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.263814 4684 
reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.264261 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.264478 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.265098 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.266236 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6stgf"] Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.266765 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.267727 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.267961 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.268240 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dmqcw"] Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.268355 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.268803 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.268902 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jwr4q"] Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.268948 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.269112 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.269510 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.270340 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.270599 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.270749 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.275213 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.275380 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.275502 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.275535 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.275745 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.279090 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.280424 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.282624 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.307067 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353040 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-systemd-units\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353088 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-system-cni-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353111 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353162 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-node-log\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353178 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-var-lib-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353195 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-ovn\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353214 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-cni-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353232 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-cnibin\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353252 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-cni-bin\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353276 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353297 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353312 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-k8s-cni-cncf-io\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353327 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-system-cni-dir\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353341 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l46bg\" (UniqueName: \"kubernetes.io/projected/5fd1b372-d164-4037-ae8e-cf634b1c4b41-kube-api-access-l46bg\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353356 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-conf-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353372 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8g2\" (UniqueName: \"kubernetes.io/projected/4fce7017-186f-4953-b968-c8a8868a0fd4-kube-api-access-wv8g2\") pod \"node-resolver-6stgf\" (UID: \"4fce7017-186f-4953-b968-c8a8868a0fd4\") " pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353386 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe8e0d00-860e-4d47-9f48-686555520d79-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353402 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-netd\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353416 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e0d00-860e-4d47-9f48-686555520d79-proxy-tls\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353440 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-ovn-kubernetes\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353462 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-log-socket\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353476 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-netns\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353489 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhqc\" (UniqueName: \"kubernetes.io/projected/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-kube-api-access-wrhqc\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353507 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe8e0d00-860e-4d47-9f48-686555520d79-rootfs\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353521 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-slash\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353534 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353557 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-etc-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353572 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovn-node-metrics-cert\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353586 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-os-release\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353600 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-systemd\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353618 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-multus-certs\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353633 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-etc-kubernetes\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cni-binary-copy\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353666 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353682 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-cni-multus\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353717 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ab0885cc-d621-4e36-9e37-1326848bd147-multus-daemon-config\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353734 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwsl\" (UniqueName: \"kubernetes.io/projected/fe8e0d00-860e-4d47-9f48-686555520d79-kube-api-access-dmwsl\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353748 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab0885cc-d621-4e36-9e37-1326848bd147-cni-binary-copy\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353763 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-netns\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353779 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-kubelet\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353792 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-config\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353816 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-env-overrides\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353829 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-script-lib\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353847 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353862 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-hostroot\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353876 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cnibin\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 
23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353890 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-bin\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5"
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353905 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-kubelet\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5"
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353920 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2mk\" (UniqueName: \"kubernetes.io/projected/ab0885cc-d621-4e36-9e37-1326848bd147-kube-api-access-cw2mk\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q"
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353934 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fce7017-186f-4953-b968-c8a8868a0fd4-hosts-file\") pod \"node-resolver-6stgf\" (UID: \"4fce7017-186f-4953-b968-c8a8868a0fd4\") " pod="openshift-dns/node-resolver-6stgf"
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353948 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-socket-dir-parent\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q"
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353961 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-os-release\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw"
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.353975 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5"
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354061 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354101 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:29.354088807 +0000 UTC m=+21.977467348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354199 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354210 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354220 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354248 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:29.354241051 +0000 UTC m=+21.977619592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354375 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354385 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354391 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354410 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:29.354404426 +0000 UTC m=+21.977782967 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354489 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: E0123 09:07:28.354509 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:29.354503449 +0000 UTC m=+21.977881990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.381506 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.401773 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 09:02:27 +0000 UTC, rotation deadline is 2026-10-18 23:04:53.922949345 +0000 UTC Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.401866 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6445h57m25.52108709s for next certificate rotation Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.420460 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455218 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-netns\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455269 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-kubelet\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455298 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-env-overrides\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455318 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-script-lib\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455349 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-config\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455361 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-netns\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455412 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cnibin\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455441 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-kubelet\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455369 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cnibin\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455526 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-bin\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455783 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-hostroot\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455817 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-kubelet\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455852 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2mk\" (UniqueName: \"kubernetes.io/projected/ab0885cc-d621-4e36-9e37-1326848bd147-kube-api-access-cw2mk\") pod 
\"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455874 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fce7017-186f-4953-b968-c8a8868a0fd4-hosts-file\") pod \"node-resolver-6stgf\" (UID: \"4fce7017-186f-4953-b968-c8a8868a0fd4\") " pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455897 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-os-release\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455921 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455944 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-socket-dir-parent\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455963 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-systemd-units\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.455985 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-system-cni-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456009 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456037 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-node-log\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456058 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-ovn\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456078 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-cni-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456097 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-cnibin\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456117 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-cni-bin\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456172 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-var-lib-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456195 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-k8s-cni-cncf-io\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456218 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-system-cni-dir\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456237 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-conf-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456257 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8g2\" (UniqueName: \"kubernetes.io/projected/4fce7017-186f-4953-b968-c8a8868a0fd4-kube-api-access-wv8g2\") pod \"node-resolver-6stgf\" (UID: \"4fce7017-186f-4953-b968-c8a8868a0fd4\") " pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 
09:07:28.456277 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe8e0d00-860e-4d47-9f48-686555520d79-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456297 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-netd\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456314 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-config\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456318 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l46bg\" (UniqueName: \"kubernetes.io/projected/5fd1b372-d164-4037-ae8e-cf634b1c4b41-kube-api-access-l46bg\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-env-overrides\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456382 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-script-lib\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456558 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e0d00-860e-4d47-9f48-686555520d79-proxy-tls\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456551 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-cni-bin\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456588 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-ovn-kubernetes\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456603 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-conf-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456621 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-netns\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456638 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fce7017-186f-4953-b968-c8a8868a0fd4-hosts-file\") pod \"node-resolver-6stgf\" (UID: \"4fce7017-186f-4953-b968-c8a8868a0fd4\") " pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456652 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhqc\" (UniqueName: \"kubernetes.io/projected/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-kube-api-access-wrhqc\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456666 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-node-log\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456679 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-log-socket\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456688 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-bin\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456720 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe8e0d00-860e-4d47-9f48-686555520d79-rootfs\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456729 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-hostroot\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456744 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-slash\") pod 
\"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456750 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-kubelet\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456559 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-cnibin\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456766 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456866 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-slash\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.456992 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-os-release\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457017 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-netns\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457033 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-netd\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457046 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-cni-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457172 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-multus-socket-dir-parent\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457205 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-etc-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457242 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovn-node-metrics-cert\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457250 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-log-socket\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457266 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-os-release\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457277 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe8e0d00-860e-4d47-9f48-686555520d79-rootfs\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457294 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-multus-certs\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457316 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-etc-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457318 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-etc-kubernetes\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457341 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-etc-kubernetes\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457395 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-multus-certs\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457395 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457423 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-system-cni-dir\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457432 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-ovn\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457465 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-run-k8s-cni-cncf-io\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457498 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cni-binary-copy\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457504 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-os-release\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457563 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-systemd-units\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457567 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-systemd\") pod \"ovnkube-node-nk7v5\" (UID: 
\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457598 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-systemd\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457637 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ab0885cc-d621-4e36-9e37-1326848bd147-multus-daemon-config\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457672 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmwsl\" (UniqueName: \"kubernetes.io/projected/fe8e0d00-860e-4d47-9f48-686555520d79-kube-api-access-dmwsl\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457733 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab0885cc-d621-4e36-9e37-1326848bd147-cni-binary-copy\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457757 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-cni-multus\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.457849 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-host-var-lib-cni-multus\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458044 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-var-lib-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458095 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-ovn-kubernetes\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458119 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-openvswitch\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 
09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458153 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-cni-binary-copy\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458161 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0885cc-d621-4e36-9e37-1326848bd147-system-cni-dir\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458203 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458853 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ab0885cc-d621-4e36-9e37-1326848bd147-multus-daemon-config\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.458979 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab0885cc-d621-4e36-9e37-1326848bd147-cni-binary-copy\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.472032 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.509288 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovn-node-metrics-cert\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.510881 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe8e0d00-860e-4d47-9f48-686555520d79-proxy-tls\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.511981 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhqc\" (UniqueName: \"kubernetes.io/projected/95d1563a-3ca4-4fb0-8365-c1168fbe2e70-kube-api-access-wrhqc\") pod \"multus-additional-cni-plugins-dmqcw\" (UID: \"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\") " pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.513331 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2mk\" (UniqueName: \"kubernetes.io/projected/ab0885cc-d621-4e36-9e37-1326848bd147-kube-api-access-cw2mk\") pod \"multus-jwr4q\" (UID: \"ab0885cc-d621-4e36-9e37-1326848bd147\") " pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.520414 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.523258 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8g2\" (UniqueName: \"kubernetes.io/projected/4fce7017-186f-4953-b968-c8a8868a0fd4-kube-api-access-wv8g2\") pod \"node-resolver-6stgf\" (UID: \"4fce7017-186f-4953-b968-c8a8868a0fd4\") " pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.532399 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l46bg\" (UniqueName: \"kubernetes.io/projected/5fd1b372-d164-4037-ae8e-cf634b1c4b41-kube-api-access-l46bg\") pod \"ovnkube-node-nk7v5\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.539401 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.547496 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:12:21.000722232 +0000 UTC Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.555478 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.572973 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.588165 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.597105 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.607023 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.617879 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd1b372_d164_4037_ae8e_cf634b1c4b41.slice/crio-a0d453ba54f696f071b7b86eca3cf00c4656d73f0f7fd74a9a9302ecf012b310 WatchSource:0}: Error finding container a0d453ba54f696f071b7b86eca3cf00c4656d73f0f7fd74a9a9302ecf012b310: Status 404 returned error can't find the container with id a0d453ba54f696f071b7b86eca3cf00c4656d73f0f7fd74a9a9302ecf012b310 Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.626837 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6stgf" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.630455 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.642199 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.643827 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jwr4q" Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.655238 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab0885cc_d621_4e36_9e37_1326848bd147.slice/crio-cf939a1aa188a7d65bff14f0b3af6c60a0fa83487ede75f4e8a4da0b33c18fd2 WatchSource:0}: Error finding container cf939a1aa188a7d65bff14f0b3af6c60a0fa83487ede75f4e8a4da0b33c18fd2: Status 404 returned error can't find the container with id cf939a1aa188a7d65bff14f0b3af6c60a0fa83487ede75f4e8a4da0b33c18fd2 Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.665895 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.678192 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.679181 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.685168 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.698556 4684 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.699895 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.702949 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"a0d453ba54f696f071b7b86eca3cf00c4656d73f0f7fd74a9a9302ecf012b310"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.706626 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"68aec69fa95602e37684e2d8b7363e3cd2aa4eb9de2eb33447c8e5222a2bb40f"} Jan 23 09:07:28 crc kubenswrapper[4684]: W0123 09:07:28.707994 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d1563a_3ca4_4fb0_8365_c1168fbe2e70.slice/crio-69bfff7172d6363e17b2b773ac43fe4b7871eca3c7e974120e9fefe38cf81f0f WatchSource:0}: Error finding container 69bfff7172d6363e17b2b773ac43fe4b7871eca3c7e974120e9fefe38cf81f0f: Status 404 returned error can't find the container with id 
69bfff7172d6363e17b2b773ac43fe4b7871eca3c7e974120e9fefe38cf81f0f Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.710885 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.721188 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.721358 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.722353 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6stgf" event={"ID":"4fce7017-186f-4953-b968-c8a8868a0fd4","Type":"ContainerStarted","Data":"99dc0f139606bc4910d1f97bbba4b6fda45fc64043cfc5c5e815bce9be856ce3"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.724144 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerStarted","Data":"cf939a1aa188a7d65bff14f0b3af6c60a0fa83487ede75f4e8a4da0b33c18fd2"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.730814 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.738023 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"742f8328d2afc9c5fe309d699cd6e8b76727748a233f522df98d2259a33af8fb"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.745004 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.745046 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.745056 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb0f4f5a485ef9ebd0d9d75f50b64056fcd9b112f2616be9594f70eb502dc590"} Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.747557 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.762250 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.812597 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.844486 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.865337 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.882975 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.897369 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.912811 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.934662 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.954544 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.969845 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.982018 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:28 crc kubenswrapper[4684]: I0123 09:07:28.999571 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.015798 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.036764 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.054395 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.070533 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.085639 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.270350 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.270591 4684 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:07:31.270559192 +0000 UTC m=+23.893937733 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.371420 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.371692 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.371803 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.371606 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.371919 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.371933 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.371886 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.371984 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:31.371967939 +0000 UTC m=+23.995346550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.371771 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.372018 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:31.37201264 +0000 UTC m=+23.995391181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.371874 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.372044 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:31.372037191 +0000 UTC m=+23.995415722 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.372377 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.372467 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.372544 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.372672 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:31.372656119 +0000 UTC m=+23.996034660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.372725 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.457766 4684 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.457903 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe8e0d00-860e-4d47-9f48-686555520d79-mcd-auth-proxy-config podName:fe8e0d00-860e-4d47-9f48-686555520d79 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:29.957877723 +0000 UTC m=+22.581256264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/fe8e0d00-860e-4d47-9f48-686555520d79-mcd-auth-proxy-config") pod "machine-config-daemon-wtphf" (UID: "fe8e0d00-860e-4d47-9f48-686555520d79") : failed to sync configmap cache: timed out waiting for the condition Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.490415 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.497219 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmwsl\" (UniqueName: \"kubernetes.io/projected/fe8e0d00-860e-4d47-9f48-686555520d79-kube-api-access-dmwsl\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.542861 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.548131 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:33:09.623670621 +0000 UTC Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.581901 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.582009 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.582072 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.582113 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.582149 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.582185 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.586977 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.587650 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.589102 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.589918 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.591139 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.591757 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.592397 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.593828 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.594582 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.595593 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.596116 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.597536 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.598267 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.598830 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.599993 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.600557 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.601727 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.602275 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.602868 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.606342 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.607015 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.609228 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.609902 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.610809 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.611565 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.612351 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.613288 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.613899 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.614507 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.615098 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.615591 4684 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.615726 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.617051 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.617861 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.618453 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.623575 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.624629 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.625845 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.626874 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.628252 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.628804 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.629420 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.630584 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.631833 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.632327 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.633432 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.634335 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.635637 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.636373 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.637338 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.637886 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.638599 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.639886 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.640360 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.748929 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerStarted","Data":"d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe"} Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.750953 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535"} Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.752883 4684 generic.go:334] "Generic (PLEG): container finished" podID="95d1563a-3ca4-4fb0-8365-c1168fbe2e70" containerID="d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb" exitCode=0 Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.752934 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerDied","Data":"d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb"} Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.752953 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerStarted","Data":"69bfff7172d6363e17b2b773ac43fe4b7871eca3c7e974120e9fefe38cf81f0f"} Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.755989 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6stgf" event={"ID":"4fce7017-186f-4953-b968-c8a8868a0fd4","Type":"ContainerStarted","Data":"e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23"} Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.757932 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e" exitCode=0 Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.758049 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e"} Jan 23 09:07:29 crc kubenswrapper[4684]: E0123 09:07:29.767897 4684 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.774430 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.804981 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.831584 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.853138 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.869356 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.890628 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.913451 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qt2j2"] Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.913888 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.914358 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.919608 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.919853 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.919992 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.922718 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.930814 4684 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.941783 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.959884 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.977340 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe8e0d00-860e-4d47-9f48-686555520d79-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.978142 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe8e0d00-860e-4d47-9f48-686555520d79-mcd-auth-proxy-config\") pod \"machine-config-daemon-wtphf\" (UID: \"fe8e0d00-860e-4d47-9f48-686555520d79\") " pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:29 crc kubenswrapper[4684]: I0123 09:07:29.991090 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:29Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.021497 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.058555 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.082097 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d5069a6f-07bb-4423-8df0-92cdc541e6de-serviceca\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.082136 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62zw\" (UniqueName: \"kubernetes.io/projected/d5069a6f-07bb-4423-8df0-92cdc541e6de-kube-api-access-l62zw\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.082161 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5069a6f-07bb-4423-8df0-92cdc541e6de-host\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: 
I0123 09:07:30.095334 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.095690 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.126775 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.152469 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.181692 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.182849 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5069a6f-07bb-4423-8df0-92cdc541e6de-host\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.182972 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d5069a6f-07bb-4423-8df0-92cdc541e6de-serviceca\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.183011 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l62zw\" (UniqueName: \"kubernetes.io/projected/d5069a6f-07bb-4423-8df0-92cdc541e6de-kube-api-access-l62zw\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " 
pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.183054 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5069a6f-07bb-4423-8df0-92cdc541e6de-host\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.184479 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d5069a6f-07bb-4423-8df0-92cdc541e6de-serviceca\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.198098 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.213429 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l62zw\" (UniqueName: \"kubernetes.io/projected/d5069a6f-07bb-4423-8df0-92cdc541e6de-kube-api-access-l62zw\") pod \"node-ca-qt2j2\" (UID: \"d5069a6f-07bb-4423-8df0-92cdc541e6de\") " pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.219573 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.249296 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.252485 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qt2j2" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.270684 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: W0123 09:07:30.271841 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5069a6f_07bb_4423_8df0_92cdc541e6de.slice/crio-af4eea8ff267dc2543e3feb4331ff2c7d9699e2640974bbc6624970d61ff5354 WatchSource:0}: Error finding container af4eea8ff267dc2543e3feb4331ff2c7d9699e2640974bbc6624970d61ff5354: Status 404 returned error can't find the container with id af4eea8ff267dc2543e3feb4331ff2c7d9699e2640974bbc6624970d61ff5354 Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.295991 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.332826 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.350453 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syn
cer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.363380 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.370028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.370360 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.370375 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.370496 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 09:07:30 crc 
kubenswrapper[4684]: I0123 09:07:30.379375 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.379681 4684 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.379964 4684 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.381020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.381050 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.381064 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.381080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.381092 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.398878 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: E0123 09:07:30.404183 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.415293 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.415345 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.415357 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.415372 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.415384 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.422911 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc 
kubenswrapper[4684]: E0123 09:07:30.431184 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload omitted; identical to the 09:07:30.404183 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.435193 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.435240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.435253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.435269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.435280 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:30 crc kubenswrapper[4684]: E0123 09:07:30.450719 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload omitted; identical to the 09:07:30.404183 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.454289 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.454340 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.454353 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.454372 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.454386 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: E0123 09:07:30.491639 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.497763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.497808 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.497820 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.497837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.497850 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: E0123 09:07:30.514349 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: E0123 09:07:30.514529 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.516589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.516620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.516633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.516649 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.516660 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.548928 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:27:30.806436457 +0000 UTC Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.619453 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.619502 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.619515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.619533 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.619546 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.722659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.722727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.722739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.722756 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.722771 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.766853 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.766902 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.766917 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.766931 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.766941 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.766953 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.768387 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.771031 4684 generic.go:334] 
"Generic (PLEG): container finished" podID="95d1563a-3ca4-4fb0-8365-c1168fbe2e70" containerID="bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4" exitCode=0 Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.771092 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerDied","Data":"bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.773723 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qt2j2" event={"ID":"d5069a6f-07bb-4423-8df0-92cdc541e6de","Type":"ContainerStarted","Data":"4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.773751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qt2j2" event={"ID":"d5069a6f-07bb-4423-8df0-92cdc541e6de","Type":"ContainerStarted","Data":"af4eea8ff267dc2543e3feb4331ff2c7d9699e2640974bbc6624970d61ff5354"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.776369 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.776398 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.776416 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"2662ffcfb351e01a4d1b712cc148edc121a8201113625aca1e852e37cdd23a20"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.788415 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.803219 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.825577 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z 
is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.826516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.826555 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.826569 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.826586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.826597 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.848191 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.866669 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.888464 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.906891 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.923159 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.936442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.936474 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.936483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.936496 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.936505 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:30Z","lastTransitionTime":"2026-01-23T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.939293 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.953319 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.967985 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.979843 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:30 crc kubenswrapper[4684]: I0123 09:07:30.996256 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.011480 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.027538 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.039982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.040342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.040351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.040364 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.040373 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.041542 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.058259 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.072979 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.088291 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.104431 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.117504 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.128173 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.146549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.146589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.146601 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.146618 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.146630 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.148869 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.163227 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.179377 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.195794 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.214169 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.239468 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z 
is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.249177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.249215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.249225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.249240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.249252 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.295257 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.295526 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:07:35.295497039 +0000 UTC m=+27.918875590 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.351645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.352037 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.352142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.352238 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.352317 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.395879 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.395932 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.395964 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.396016 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396121 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396139 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396186 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396219 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:35.396201795 +0000 UTC m=+28.019580336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396146 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396249 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:35.396226905 +0000 UTC m=+28.019605496 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396257 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396270 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396315 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:35.396302087 +0000 UTC m=+28.019680648 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396153 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396339 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.396376 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:35.396367829 +0000 UTC m=+28.019746370 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.454527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.454573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.454586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.454604 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.454618 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.549627 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:55:04.904644548 +0000 UTC Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.558117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.558217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.558230 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.558247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.558258 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.584391 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.584514 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.584578 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.584629 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.584674 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:31 crc kubenswrapper[4684]: E0123 09:07:31.584758 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.661802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.662139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.662153 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.662171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.662180 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.765098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.765156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.765168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.765184 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.765195 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.782266 4684 generic.go:334] "Generic (PLEG): container finished" podID="95d1563a-3ca4-4fb0-8365-c1168fbe2e70" containerID="11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322" exitCode=0 Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.782330 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerDied","Data":"11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.813283 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.835447 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.857375 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.868593 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.868638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.868651 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.868666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.868678 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.871246 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.887058 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.903932 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.920509 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.941449 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.957741 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.971315 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.971562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.971658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.971757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.971818 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:31Z","lastTransitionTime":"2026-01-23T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.972882 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.984258 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:31 crc kubenswrapper[4684]: I0123 09:07:31.996214 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.008822 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.021047 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.074362 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.074411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.074423 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.074442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.074453 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.177091 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.177390 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.177482 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.177564 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.177659 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.280035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.280076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.280089 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.280102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.280111 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.383243 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.383289 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.383304 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.383321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.383332 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.485820 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.485857 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.485869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.485883 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.485895 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.551448 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:26:03.781948548 +0000 UTC Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.588389 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.588433 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.588445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.588460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.588481 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.690990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.691023 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.691034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.691049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.691094 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.790514 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.792947 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.792990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.793002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.793019 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.793031 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.793632 4684 generic.go:334] "Generic (PLEG): container finished" podID="95d1563a-3ca4-4fb0-8365-c1168fbe2e70" containerID="4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e" exitCode=0 Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.793671 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerDied","Data":"4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.819082 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.832326 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.853010 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.870239 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.885856 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.895164 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.895228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.895238 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.895253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.895265 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.906217 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.918556 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.933317 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.948789 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.963465 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.984279 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:32Z is after 2025-08-24T17:21:41Z" Jan 23 
09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.996958 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.996998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.997008 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.997024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:32 crc kubenswrapper[4684]: I0123 09:07:32.997034 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:32Z","lastTransitionTime":"2026-01-23T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.007181 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.029594 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z 
is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.044370 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.103175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.103572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.103586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.103606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.103618 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.206331 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.206379 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.206392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.206409 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.206423 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.309329 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.309466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.309478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.309493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.309507 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.411307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.411345 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.411355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.411370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.411382 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.514074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.514107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.514118 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.514135 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.514147 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.551761 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:00:40.863039971 +0000 UTC
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.581013 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:33 crc kubenswrapper[4684]: E0123 09:07:33.581171 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.581023 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:33 crc kubenswrapper[4684]: E0123 09:07:33.581630 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.581653 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:33 crc kubenswrapper[4684]: E0123 09:07:33.581766 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.616960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.616994 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.617006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.617021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.617032 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.719508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.719554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.719564 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.719580 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.719592 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.800383 4684 generic.go:334] "Generic (PLEG): container finished" podID="95d1563a-3ca4-4fb0-8365-c1168fbe2e70" containerID="d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9" exitCode=0
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.800418 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerDied","Data":"d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9"}
Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.814819 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.821877 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.821924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.821936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.821952 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.821964 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.827773 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.841965 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.860062 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.873105 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.886833 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.904463 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.922099 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.925647 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.925679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.925689 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.925721 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.925733 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:33Z","lastTransitionTime":"2026-01-23T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.935968 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.946232 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.954936 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.964739 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.977855 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:33 crc kubenswrapper[4684]: I0123 09:07:33.992414 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:33Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.027879 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.027923 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.027933 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.027948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.027959 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.131035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.131068 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.131075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.131088 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.131098 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.234253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.234316 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.234328 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.234345 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.234356 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.337191 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.337234 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.337245 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.337261 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.337275 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.440035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.440078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.440095 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.440113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.440125 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.542815 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.543228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.543334 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.543518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.543629 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.552290 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:26:03.092010754 +0000 UTC Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.646203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.646243 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.646255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.646269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.646278 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.748976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.749028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.749041 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.749061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.749097 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.805634 4684 generic.go:334] "Generic (PLEG): container finished" podID="95d1563a-3ca4-4fb0-8365-c1168fbe2e70" containerID="a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143" exitCode=0 Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.805747 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerDied","Data":"a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.824653 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.849608 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.852333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.852360 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.852369 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.852389 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.852401 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.868894 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b402660
65c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.885623 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.902056 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.914605 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.929649 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac
84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.941378 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.953919 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.956043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.956097 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.956107 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.956127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.956139 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:34Z","lastTransitionTime":"2026-01-23T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.966526 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.976229 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.985916 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:34 crc kubenswrapper[4684]: I0123 09:07:34.998139 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.011258 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.058620 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.058658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.058666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.058683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.058714 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.161721 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.161764 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.161774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.161790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.161801 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.264227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.264274 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.264285 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.264301 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.264312 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.336125 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.336282 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:07:43.336252185 +0000 UTC m=+35.959630766 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.367577 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.367621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.367630 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.367643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.367652 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.436785 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.436842 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.436872 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.436895 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437014 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437033 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437046 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437087 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:43.437074505 +0000 UTC m=+36.060453046 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437442 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437453 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437461 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437485 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:43.437478456 +0000 UTC m=+36.060856997 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437510 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437607 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:43.437581689 +0000 UTC m=+36.060960240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437531 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.437655 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 09:07:43.437646211 +0000 UTC m=+36.061024772 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.471360 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.471430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.471449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.471486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.471509 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.552772 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:55:54.246514723 +0000 UTC Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.573578 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.573620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.573630 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.573658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.573669 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.581123 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.581219 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.581153 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.581283 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.581368 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:35 crc kubenswrapper[4684]: E0123 09:07:35.581461 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.676685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.676744 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.676769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.676785 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.676796 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.778720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.778764 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.778774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.778789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.778800 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.813339 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.813608 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.819923 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" event={"ID":"95d1563a-3ca4-4fb0-8365-c1168fbe2e70","Type":"ContainerStarted","Data":"49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06"} Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.830268 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.835941 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.843234 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.857148 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.870825 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac
84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.881467 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.881784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.881933 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.882069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.882166 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.883677 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.897298 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.910567 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.920293 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.929343 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.940529 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.951523 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.962170 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.971806 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.984323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.984356 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.984364 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.984380 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.984392 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:35Z","lastTransitionTime":"2026-01-23T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:35 crc kubenswrapper[4684]: I0123 09:07:35.989569 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:35Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.006726 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.018468 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.033311 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.044947 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.057581 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.070346 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.082768 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.086337 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.086373 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.086383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.086396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.086405 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.092394 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.102442 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.113364 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.127361 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.139980 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.154654 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.166067 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.188620 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.188673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.188684 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.188715 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.188726 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.290807 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.290845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.290854 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.290868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.290877 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.392966 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.392998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.393010 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.393053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.393062 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.495173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.495205 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.495214 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.495228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.495236 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.552971 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:34:45.810212994 +0000 UTC Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.556820 4684 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.597729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.597768 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.597777 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.597792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.597802 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.701381 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.701663 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.701674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.701688 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.701720 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.803739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.803774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.803786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.803802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.803813 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.824011 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.824046 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.849575 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.863012 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.875407 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.892047 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e
787db1ce1cebe4a788c40d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.904418 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.906048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.906077 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.906086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.906099 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.906108 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:36Z","lastTransitionTime":"2026-01-23T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.917341 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.932850 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.947238 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d74
62\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.962133 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.974116 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.984808 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:36 crc kubenswrapper[4684]: I0123 09:07:36.996310 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.008175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.008218 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.008231 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.008248 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.008259 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.009943 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.023737 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.036604 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.110987 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.111041 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.111054 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.111078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.111093 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.213347 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.213383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.213393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.213407 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.213419 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.315433 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.315501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.315515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.315532 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.315546 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.418186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.418241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.418253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.418270 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.418282 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.520312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.520646 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.520664 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.520681 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.520690 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.553822 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:01:07.316790608 +0000 UTC Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.581509 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.581509 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.581825 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:37 crc kubenswrapper[4684]: E0123 09:07:37.581818 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:37 crc kubenswrapper[4684]: E0123 09:07:37.581940 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:37 crc kubenswrapper[4684]: E0123 09:07:37.582047 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.596543 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.618133 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e
787db1ce1cebe4a788c40d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.622480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.622529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.622539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.622553 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.622561 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.633229 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.647401 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.669055 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.690455 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.706935 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.720722 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.724570 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.724597 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.724607 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.724653 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.724663 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.731919 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.744446 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.757264 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.772298 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.807715 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.825998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.826045 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.826056 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.826071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.826082 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.850188 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.928074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.928122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.928134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.928150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.928162 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:37Z","lastTransitionTime":"2026-01-23T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:37 crc kubenswrapper[4684]: I0123 09:07:37.940966 4684 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.030499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.030551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.030562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.030578 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.030587 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.133936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.134001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.134015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.134031 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.134044 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.235977 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.236206 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.236294 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.236383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.236494 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.338407 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.338466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.338479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.338500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.338512 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.441026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.441082 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.441095 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.441111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.441123 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.516655 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.528788 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.543431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.543458 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.543466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.543478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.543449 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.543487 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.554758 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:52:15.896387823 +0000 UTC Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.557780 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.571896 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.583457 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.593713 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.604319 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.622989 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.634120 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.646346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.646385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.646395 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.646427 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.646440 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.733139 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.735942 4684 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.747125 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.748968 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.749086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.749266 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.749565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.750043 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.760732 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.778615 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e
787db1ce1cebe4a788c40d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.792240 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.831362 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/0.log" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.835848 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a" exitCode=1 Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.835895 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.836553 4684 scope.go:117] "RemoveContainer" containerID="71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.851842 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.853675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.853733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.853743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.853758 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.853767 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.865170 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.878284 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.890632 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.903455 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.920025 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.934010 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.946660 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.955936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.955996 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.956007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.956043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.956055 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:38Z","lastTransitionTime":"2026-01-23T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.961823 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.982596 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:38 crc kubenswrapper[4684]: I0123 09:07:38.997933 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 
+0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc 
kubenswrapper[4684]: I0123 09:07:39.013619 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.031331 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.048600 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.058360 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.058402 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.058412 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.058426 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.058478 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.161371 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.161430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.161443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.161464 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.161476 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.264400 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.264457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.264474 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.264492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.264504 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.301133 4684 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.367767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.367815 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.367827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.367846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.367859 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.470218 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.470269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.470281 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.470296 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.470306 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.555791 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:39:52.992809571 +0000 UTC
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.572945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.572983 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.572995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.573011 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.573021 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.581225 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.581310 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.581375 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:39 crc kubenswrapper[4684]: E0123 09:07:39.581335 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:39 crc kubenswrapper[4684]: E0123 09:07:39.581456 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:39 crc kubenswrapper[4684]: E0123 09:07:39.581519 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.675159 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.675202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.675213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.675229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.675240 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.778130 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.778164 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.778175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.778190 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.778200 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.840283 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/0.log"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.843570 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de"}
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.844776 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.857935 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.871011 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.880739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.880774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.880785 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.880803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.880814 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.885499 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.901013 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.913078 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.925790 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.937807 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.951902 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.963413 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.976938 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.983340 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.983375 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.983387 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.983403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.983415 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:39Z","lastTransitionTime":"2026-01-23T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:39 crc kubenswrapper[4684]: I0123 09:07:39.990287 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:39Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.002791 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.018541 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.042854 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21
cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.085572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.085606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.085615 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.085630 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.085639 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.187935 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.187971 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.187984 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.187998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.188008 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.289864 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.289894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.289904 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.289917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.289925 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.392174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.392212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.392225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.392241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.392252 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.493766 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.493796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.493804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.493819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.493828 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.556165 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:19:42.415885576 +0000 UTC Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.571639 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.571680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.571690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.571745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.571759 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: E0123 09:07:40.583773 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.586974 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.587021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.587034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.587049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.587060 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: E0123 09:07:40.599318 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.602816 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.602885 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.602894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.602908 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.602917 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: E0123 09:07:40.615846 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.619646 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.619677 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.619717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.619736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.619747 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: E0123 09:07:40.631793 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.635339 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.635373 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.635383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.635398 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.635409 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: E0123 09:07:40.646148 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:40 crc kubenswrapper[4684]: E0123 09:07:40.646261 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.647751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.647780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.647788 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.647802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.647812 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.750790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.750833 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.750845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.750861 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.750872 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.853531 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.853572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.853581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.853594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.853604 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.956090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.956124 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.956134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.956148 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:40 crc kubenswrapper[4684]: I0123 09:07:40.956158 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:40Z","lastTransitionTime":"2026-01-23T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.058313 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.058347 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.058355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.058370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.058379 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.160795 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.160825 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.160834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.160848 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.160857 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.263150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.263187 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.263199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.263215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.263227 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.365819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.365864 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.365875 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.365889 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.365927 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.468760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.468803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.468815 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.468834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.468845 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.554285 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm"] Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.554756 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.556434 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:08:52.644475915 +0000 UTC Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.557050 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.557388 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.558189 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17ebb42b-c0ef-423b-8337-cb73bcdbd301-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.558242 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17ebb42b-c0ef-423b-8337-cb73bcdbd301-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.558300 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17ebb42b-c0ef-423b-8337-cb73bcdbd301-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.558326 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdrh\" (UniqueName: \"kubernetes.io/projected/17ebb42b-c0ef-423b-8337-cb73bcdbd301-kube-api-access-bqdrh\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.570913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.570950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.570960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.570977 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.570987 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.572271 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.581725 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.581788 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.581741 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:41 crc kubenswrapper[4684]: E0123 09:07:41.581889 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:41 crc kubenswrapper[4684]: E0123 09:07:41.582027 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:41 crc kubenswrapper[4684]: E0123 09:07:41.582122 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.589270 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 
09:07:41.602104 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.614415 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.622886 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.631338 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.641335 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.652900 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.658834 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17ebb42b-c0ef-423b-8337-cb73bcdbd301-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.658883 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdrh\" (UniqueName: \"kubernetes.io/projected/17ebb42b-c0ef-423b-8337-cb73bcdbd301-kube-api-access-bqdrh\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.658922 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17ebb42b-c0ef-423b-8337-cb73bcdbd301-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.658956 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17ebb42b-c0ef-423b-8337-cb73bcdbd301-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.659451 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17ebb42b-c0ef-423b-8337-cb73bcdbd301-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.659901 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17ebb42b-c0ef-423b-8337-cb73bcdbd301-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: 
\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.664233 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17ebb42b-c0ef-423b-8337-cb73bcdbd301-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.668432 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.672516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.672581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.672594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.672609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc 
kubenswrapper[4684]: I0123 09:07:41.672646 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.674661 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdrh\" (UniqueName: \"kubernetes.io/projected/17ebb42b-c0ef-423b-8337-cb73bcdbd301-kube-api-access-bqdrh\") pod \"ovnkube-control-plane-749d76644c-ckltm\" (UID: \"17ebb42b-c0ef-423b-8337-cb73bcdbd301\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.682470 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.693271 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.710984 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.719820 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.729051 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.738876 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.774845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.774921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.774935 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.774952 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.774962 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.866893 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.877278 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.877299 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.877307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.877320 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.877329 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:41 crc kubenswrapper[4684]: W0123 09:07:41.879474 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17ebb42b_c0ef_423b_8337_cb73bcdbd301.slice/crio-bf79d8396c5451feed38775580a4f1249a2e52c0a75afa9f12b3b00cf2aac8ab WatchSource:0}: Error finding container bf79d8396c5451feed38775580a4f1249a2e52c0a75afa9f12b3b00cf2aac8ab: Status 404 returned error can't find the container with id bf79d8396c5451feed38775580a4f1249a2e52c0a75afa9f12b3b00cf2aac8ab Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.979923 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.979964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.979996 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.980013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:41 crc kubenswrapper[4684]: I0123 09:07:41.980025 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:41Z","lastTransitionTime":"2026-01-23T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.083596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.083656 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.083667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.083685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.083723 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.186017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.186051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.186059 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.186072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.186082 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.289412 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.289617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.289669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.289689 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.289752 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.299563 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wrrtl"] Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.300562 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 crc kubenswrapper[4684]: E0123 09:07:42.300741 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.313994 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.330196 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.348200 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.365338 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.366228 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlsjn\" (UniqueName: \"kubernetes.io/projected/8a1145d8-e0e9-481b-9e5c-65815e74874f-kube-api-access-hlsjn\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.366329 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 
crc kubenswrapper[4684]: I0123 09:07:42.379189 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.391930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.391986 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.392003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.392030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.392045 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.393427 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.405364 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.417933 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.433343 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.446036 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.456805 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.467109 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlsjn\" (UniqueName: \"kubernetes.io/projected/8a1145d8-e0e9-481b-9e5c-65815e74874f-kube-api-access-hlsjn\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.467170 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 crc kubenswrapper[4684]: E0123 09:07:42.467271 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:42 crc kubenswrapper[4684]: E0123 09:07:42.467329 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:07:42.967310789 +0000 UTC m=+35.590689330 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.470524 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.481376 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.494893 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.494937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.494951 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.494971 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.494988 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.501189 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.512829 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.523675 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.556606 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:15:34.110448439 +0000 UTC Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.597253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.597293 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.597303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.597317 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.597326 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.659540 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlsjn\" (UniqueName: \"kubernetes.io/projected/8a1145d8-e0e9-481b-9e5c-65815e74874f-kube-api-access-hlsjn\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.701148 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.701185 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.701195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.701209 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.701220 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.803916 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.803950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.803958 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.803972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.803981 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.853097 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/1.log" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.853825 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/0.log" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.856143 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de" exitCode=1 Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.856213 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.856313 4684 scope.go:117] "RemoveContainer" containerID="71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.857146 4684 scope.go:117] "RemoveContainer" containerID="0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de" Jan 23 09:07:42 crc kubenswrapper[4684]: E0123 09:07:42.857870 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.858235 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" event={"ID":"17ebb42b-c0ef-423b-8337-cb73bcdbd301","Type":"ContainerStarted","Data":"bf79d8396c5451feed38775580a4f1249a2e52c0a75afa9f12b3b00cf2aac8ab"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.868739 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.881553 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.892873 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.906076 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.907218 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.907248 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.907259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.907274 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.907286 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:42Z","lastTransitionTime":"2026-01-23T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.927688 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.943623 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.959267 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.971550 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:42 crc kubenswrapper[4684]: E0123 09:07:42.971868 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:42 crc kubenswrapper[4684]: E0123 09:07:42.972016 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:07:43.971983255 +0000 UTC m=+36.595361986 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.980166 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:42 crc kubenswrapper[4684]: I0123 09:07:42.994775 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.009822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.009872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.009883 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.009900 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.009911 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.011020 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:
07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.028891 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] 
waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.040886 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.052531 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.063310 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.081141 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.094187 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.112986 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.113023 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.113033 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.113048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.113059 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.215993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.216031 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.216042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.216058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.216068 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.318417 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.318468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.318481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.318498 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.318511 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.375057 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.375314 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:07:59.375283036 +0000 UTC m=+51.998661577 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.421176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.421213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.421221 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.421235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.421244 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.476070 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.476134 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.476173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.476215 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476219 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476276 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476294 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476305 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476311 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:59.476295391 +0000 UTC m=+52.099673932 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476325 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476336 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:59.476326462 +0000 UTC m=+52.099705003 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476383 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:59.476365693 +0000 UTC m=+52.099744294 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476413 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476471 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476487 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.476548 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:07:59.476529858 +0000 UTC m=+52.099908479 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.523300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.523336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.523348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.523365 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.523375 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.557976 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:32:39.764084154 +0000 UTC Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.581664 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.581682 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.581824 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.581887 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.581987 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.582053 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.625909 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.625949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.625962 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.625977 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.625988 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.728426 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.728471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.728480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.728495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.728505 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.831205 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.831250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.831264 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.831280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.831288 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.862168 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" event={"ID":"17ebb42b-c0ef-423b-8337-cb73bcdbd301","Type":"ContainerStarted","Data":"831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.863839 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/1.log" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.933730 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.933767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.933778 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.933793 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.933804 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:43Z","lastTransitionTime":"2026-01-23T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:43 crc kubenswrapper[4684]: I0123 09:07:43.980381 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.980538 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:43 crc kubenswrapper[4684]: E0123 09:07:43.980600 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:07:45.980582246 +0000 UTC m=+38.603960787 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.036929 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.036978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.036987 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.037007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.037017 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.139094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.139140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.139152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.139167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.139178 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.241169 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.241244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.241257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.241270 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.241278 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.343899 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.343940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.343951 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.343965 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.343974 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.446504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.446537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.446545 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.446559 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.446567 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.549075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.549113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.549123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.549138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.549150 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.558529 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:21:50.538002533 +0000 UTC Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.581902 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:44 crc kubenswrapper[4684]: E0123 09:07:44.582026 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.651334 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.651663 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.651675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.651688 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.651712 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.753607 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.753652 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.753660 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.753674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.753683 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.855956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.856291 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.856369 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.856442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.856508 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.871191 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" event={"ID":"17ebb42b-c0ef-423b-8337-cb73bcdbd301","Type":"ContainerStarted","Data":"53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.889853 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.905965 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.918219 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.935530 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 
cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.945667 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 
09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.955762 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.958390 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.958756 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.958862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.958956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.959036 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:44Z","lastTransitionTime":"2026-01-23T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.968782 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.981712 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:44 crc kubenswrapper[4684]: I0123 09:07:44.995853 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T09:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.007911 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.022011 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.031527 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.042771 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.057010 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.060974 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.061272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.061365 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.061502 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.061558 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.071796 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.083659 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.163982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.164254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.164326 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.164416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.164488 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.267177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.267249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.267265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.267281 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.267290 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.369855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.369903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.369916 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.369934 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.369947 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.471919 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.471949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.471959 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.471973 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.471981 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.559617 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:21:13.192316613 +0000 UTC Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.574796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.574846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.574859 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.574875 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.574884 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.581197 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.581210 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:45 crc kubenswrapper[4684]: E0123 09:07:45.581368 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:45 crc kubenswrapper[4684]: E0123 09:07:45.581473 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.581246 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:45 crc kubenswrapper[4684]: E0123 09:07:45.581581 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.678074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.678117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.678128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.678143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.678154 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.780572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.781075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.781156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.781222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.781286 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.883538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.883575 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.883586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.883600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.883610 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.986138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.986178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.986192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.986209 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:45 crc kubenswrapper[4684]: I0123 09:07:45.986220 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:45Z","lastTransitionTime":"2026-01-23T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.001288 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:46 crc kubenswrapper[4684]: E0123 09:07:46.001415 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:46 crc kubenswrapper[4684]: E0123 09:07:46.001471 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:07:50.001456504 +0000 UTC m=+42.624835055 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.088385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.088415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.088424 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.088437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.088447 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.191208 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.191250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.191259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.191277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.191305 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.293800 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.293845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.293856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.293872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.293883 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.396570 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.396612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.396624 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.396641 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.396653 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.499219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.499265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.499277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.499294 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.499306 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.560642 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:03:58.011677985 +0000 UTC Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.581100 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:46 crc kubenswrapper[4684]: E0123 09:07:46.581257 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.602123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.602168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.602178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.602194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.602203 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.704282 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.704310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.704319 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.704332 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.704342 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.806944 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.806972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.806981 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.806994 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.807020 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.915017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.915050 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.915060 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.915073 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:46 crc kubenswrapper[4684]: I0123 09:07:46.915082 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:46Z","lastTransitionTime":"2026-01-23T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.017794 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.017829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.017837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.017853 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.017863 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.120003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.120194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.120362 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.120553 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.120732 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.223350 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.223376 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.223385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.223397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.223409 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.325770 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.326012 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.326131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.326215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.326289 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.429128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.429367 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.429435 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.429500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.429556 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.531565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.531815 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.531883 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.531954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.532028 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.560994 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:04:35.333999185 +0000 UTC
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.581845 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:47 crc kubenswrapper[4684]: E0123 09:07:47.582106 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.581978 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:47 crc kubenswrapper[4684]: E0123 09:07:47.582328 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.581927 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:47 crc kubenswrapper[4684]: E0123 09:07:47.582497 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.602313 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.616451 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.629125 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.634055 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.634090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.634099 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.634115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.634126 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.642495 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.657394 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.673417 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.687671 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.699069 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.708916 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.719315 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.730577 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.736059 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.736092 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.736101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.736113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.736123 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.743130 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.754998 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.765785 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.778392 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.789520 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:47Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.838857 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.838907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.838918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.838935 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.838946 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.940868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.940924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.940945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.940972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:47 crc kubenswrapper[4684]: I0123 09:07:47.940991 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:47Z","lastTransitionTime":"2026-01-23T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.043052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.043077 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.043087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.043101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.043111 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.145445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.145504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.145516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.145532 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.145544 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.261330 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.261378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.261392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.261411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.261426 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.363801 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.363851 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.363862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.363882 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.363898 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.466835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.466880 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.466897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.466917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.466933 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.561089 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:18:51.319628416 +0000 UTC
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.569127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.569153 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.569161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.569174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.569183 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.581437 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:07:48 crc kubenswrapper[4684]: E0123 09:07:48.581579 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.671413 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.671448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.671456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.671471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.671479 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.773603 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.773887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.773970 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.774084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.774187 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.877339 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.877388 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.877397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.877414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.877427 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.979839 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.980146 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.980274 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.980378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:48 crc kubenswrapper[4684]: I0123 09:07:48.980479 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:48Z","lastTransitionTime":"2026-01-23T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.084591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.084616 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.084624 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.084637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.084645 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.187653 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.187998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.188167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.188346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.188502 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.291022 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.291309 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.291427 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.291546 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.291638 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.394538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.394571 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.394580 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.394593 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.394602 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.498371 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.498401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.498411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.498424 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.498433 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.561211 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:22:00.015517101 +0000 UTC
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.581830 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.581875 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.581830 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:49 crc kubenswrapper[4684]: E0123 09:07:49.581956 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:49 crc kubenswrapper[4684]: E0123 09:07:49.582060 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:49 crc kubenswrapper[4684]: E0123 09:07:49.582168 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.600830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.601079 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.601175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.601292 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.601374 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.704225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.704290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.704299 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.704314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.704323 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.807166 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.807203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.807211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.807227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.807238 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.909492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.909534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.909545 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.909560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:49 crc kubenswrapper[4684]: I0123 09:07:49.909573 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:49Z","lastTransitionTime":"2026-01-23T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.012416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.012468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.012481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.012507 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.012518 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.081902 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.082161 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.082309 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:07:58.082270362 +0000 UTC m=+50.705648913 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.115770 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.115834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.115850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.115870 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.115889 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.219239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.219289 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.219300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.219319 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.219332 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.322397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.322444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.322454 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.322473 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.322483 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.425333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.425367 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.425380 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.425396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.425406 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.527506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.527547 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.527557 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.527572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.527582 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.562107 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:20:46.547326137 +0000 UTC
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.581480 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.581640 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.630283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.630314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.630324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.630336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.630344 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.732750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.732795 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.732804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.732822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.732832 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.819113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.819168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.819181 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.819203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.819220 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.837594 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.841060 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.841089 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.841098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.841112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.841121 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.853649 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.858087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.858128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.858143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.858159 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.858170 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.871847 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.875876 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.876171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.876259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.876358 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.876452 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.892780 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.897105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.897340 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.897473 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.897575 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.897652 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.910541 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:50 crc kubenswrapper[4684]: E0123 09:07:50.911044 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.912736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.912773 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.912792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.912810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:50 crc kubenswrapper[4684]: I0123 09:07:50.912820 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:50Z","lastTransitionTime":"2026-01-23T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.015900 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.015938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.015948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.015964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.015977 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.119006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.119072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.119085 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.119103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.119115 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.221848 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.221898 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.221911 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.221927 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.221940 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.324660 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.324714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.324723 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.324736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.324744 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.427293 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.427327 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.427335 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.427348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.427356 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.529500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.529541 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.529552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.529566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.529576 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.563096 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:40:32.951825103 +0000 UTC Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.581465 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.581465 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:51 crc kubenswrapper[4684]: E0123 09:07:51.581678 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:51 crc kubenswrapper[4684]: E0123 09:07:51.581596 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.581992 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:51 crc kubenswrapper[4684]: E0123 09:07:51.582088 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.632114 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.632196 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.632210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.632237 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.632250 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.734690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.734753 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.734763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.734779 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.734790 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.837924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.838154 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.838166 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.838182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.838193 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.941887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.941942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.941955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.941978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:51 crc kubenswrapper[4684]: I0123 09:07:51.941999 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:51Z","lastTransitionTime":"2026-01-23T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.044062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.044340 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.044465 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.044580 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.044713 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.147390 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.147443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.147460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.147478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.147486 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.250789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.250832 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.250845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.250863 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.250881 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.353269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.353308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.353317 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.353331 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.353339 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.455415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.455443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.455451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.455465 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.455474 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.557889 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.557919 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.557928 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.557940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.557950 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.564244 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:16:08.744524421 +0000 UTC Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.581719 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:52 crc kubenswrapper[4684]: E0123 09:07:52.581897 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.660174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.660499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.660607 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.660749 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.660945 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.762711 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.762944 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.763195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.763343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.763415 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.865954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.866215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.866428 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.866659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.866894 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.969288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.969600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.969677 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.969859 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:52 crc kubenswrapper[4684]: I0123 09:07:52.969957 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:52Z","lastTransitionTime":"2026-01-23T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.072843 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.072890 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.072903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.072922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.072936 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.175324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.175373 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.175384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.175399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.175409 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.277465 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.277498 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.277508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.277520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.277532 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.379973 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.379998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.380047 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.380061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.380070 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.482906 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.482954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.482965 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.482986 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.483002 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.565247 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:36:31.036424946 +0000 UTC Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.581734 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.581820 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:07:53 crc kubenswrapper[4684]: E0123 09:07:53.581918 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:07:53 crc kubenswrapper[4684]: E0123 09:07:53.582012 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.582166 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:07:53 crc kubenswrapper[4684]: E0123 09:07:53.582394 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.590094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.590189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.590206 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.590227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.590242 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.692849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.692898 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.692912 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.692937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.692968 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.795948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.795992 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.796000 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.796014 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.796022 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.898545 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.898583 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.898593 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.898609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:53 crc kubenswrapper[4684]: I0123 09:07:53.898620 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:53Z","lastTransitionTime":"2026-01-23T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.001650 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.001712 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.001724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.001741 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.001752 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.104518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.104743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.104828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.104953 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.105045 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.207599 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.207871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.207963 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.208062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.208178 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.311075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.311117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.311129 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.311146 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.311160 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.413461 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.413505 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.413525 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.413547 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.413564 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.516082 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.516126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.516137 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.516155 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.516172 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.566190 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:08:46.728406444 +0000 UTC Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.581719 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:54 crc kubenswrapper[4684]: E0123 09:07:54.581867 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.619098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.619131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.619149 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.619166 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.619178 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.722321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.722367 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.722378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.722394 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.722405 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.825424 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.825481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.825494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.825514 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.825525 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.928137 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.928404 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.928500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.928611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:54 crc kubenswrapper[4684]: I0123 09:07:54.928714 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:54Z","lastTransitionTime":"2026-01-23T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.032124 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.032176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.032189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.032206 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.032218 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.135363 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.135402 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.135412 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.135425 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.135435 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.237977 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.238012 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.238022 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.238038 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.238048 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.340622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.340674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.340688 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.340743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.340761 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.443030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.443061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.443075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.443097 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.443139 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.545177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.545231 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.545244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.545262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.545274 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.566537 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:10:58.384120407 +0000 UTC
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.581198 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.581268 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.581401 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:55 crc kubenswrapper[4684]: E0123 09:07:55.581492 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:55 crc kubenswrapper[4684]: E0123 09:07:55.581660 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:55 crc kubenswrapper[4684]: E0123 09:07:55.581754 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.647206 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.647287 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.647298 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.647314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.647343 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.749631 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.749662 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.749670 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.749684 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.749692 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.852453 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.852672 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.852808 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.852912 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.853042 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.955691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.955743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.955752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.955765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:55 crc kubenswrapper[4684]: I0123 09:07:55.955774 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:55Z","lastTransitionTime":"2026-01-23T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.058371 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.058459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.058472 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.058490 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.058503 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.161117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.161168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.161183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.161203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.161215 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.263362 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.263395 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.263405 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.263417 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.263426 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.365856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.365907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.365927 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.365942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.365952 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.468760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.469040 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.469123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.469225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.469294 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.566806 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:38:47.078885426 +0000 UTC
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.571531 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.571594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.571606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.571620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.571634 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.580927 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:07:56 crc kubenswrapper[4684]: E0123 09:07:56.581046 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.674135 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.674192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.674202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.674218 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.674228 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.777434 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.777811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.777916 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.778029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.778124 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.879957 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.879984 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.879992 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.880022 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.880031 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.982186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.982486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.982779 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.982976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:56 crc kubenswrapper[4684]: I0123 09:07:56.983182 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:56Z","lastTransitionTime":"2026-01-23T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.086195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.086240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.086254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.086270 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.086284 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.188986 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.189021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.189028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.189043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.189052 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.291107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.291167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.291183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.291200 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.291213 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.394393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.394445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.394457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.394478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.394491 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.496841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.496884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.496895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.496917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.496929 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.567468 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:05:18.763091285 +0000 UTC
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.582832 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:57 crc kubenswrapper[4684]: E0123 09:07:57.582972 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.583873 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.583907 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:57 crc kubenswrapper[4684]: E0123 09:07:57.584035 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:57 crc kubenswrapper[4684]: E0123 09:07:57.584111 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.584683 4684 scope.go:117] "RemoveContainer" containerID="0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.597777 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.598812 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.598945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.599053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.599133 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.599218 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.609075 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.623951 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.639045 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.649582 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.665417 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.677360 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.687903 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.701429 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.701488 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.701500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.701554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.701569 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.706208 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21
cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71f1640626a831e4da81a382d015a6467377fa8e787db1ce1cebe4a788c40d8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"message\\\":\\\"3 09:07:37.962441 5888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 09:07:37.962452 5888 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 09:07:37.962460 5888 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 09:07:37.962476 5888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 09:07:37.962491 5888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 09:07:37.962519 5888 factory.go:656] Stopping watch factory\\\\nI0123 09:07:37.962539 5888 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 09:07:37.962547 5888 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 09:07:37.962553 5888 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 09:07:37.962551 5888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 09:07:37.962559 5888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 09:07:37.962570 5888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 09:07:37.962580 5888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 09:07:37.962599 5888 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 09:07:37.962628 5888 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.718276 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 
09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.730022 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.742561 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.754427 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.769007 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.782133 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.794093 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.804476 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.804524 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.804536 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.804553 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.804566 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.816322 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.827930 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 
09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.839774 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.852711 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.865546 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.878972 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.892394 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.903242 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.906139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.906167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.906194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.906209 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.906220 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:57Z","lastTransitionTime":"2026-01-23T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.912320 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.922691 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.934526 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.946802 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.963128 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.972402 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.983138 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:57 crc kubenswrapper[4684]: I0123 09:07:57.993119 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:57Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.008988 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.009031 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.009046 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.009065 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.009079 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.088761 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:58 crc kubenswrapper[4684]: E0123 09:07:58.088939 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:58 crc kubenswrapper[4684]: E0123 09:07:58.088999 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:08:14.088983568 +0000 UTC m=+66.712362109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.110990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.111025 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.111036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.111052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.111062 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.213804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.213824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.213833 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.213845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.213853 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.315291 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.315323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.315331 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.315344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.315354 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.418090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.418123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.418134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.418151 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.418163 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.520808 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.520842 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.520852 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.520868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.520880 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.568553 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:05:38.81289648 +0000 UTC Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.581873 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:07:58 crc kubenswrapper[4684]: E0123 09:07:58.581992 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.623567 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.623601 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.623609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.623628 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.623641 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.726028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.726062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.726071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.726083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.726091 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.828322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.828357 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.828367 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.828381 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.828392 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.912047 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/1.log" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.914475 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.914951 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.929219 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cf
d5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 
tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:58Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.929904 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.929928 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.929937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.929950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 
09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.929959 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:58Z","lastTransitionTime":"2026-01-23T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.941852 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:58Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.954416 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:58Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.969732 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f2225
1d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:58Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.983061 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:58Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:58 crc kubenswrapper[4684]: I0123 09:07:58.994663 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:58Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.004640 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.013380 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.021757 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.032275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.032314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.032323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.032337 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.032347 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.035914 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.046720 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.059738 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.072145 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.086247 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.106621 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae
9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.118370 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:07:59Z is after 2025-08-24T17:21:41Z" Jan 23 
09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.134881 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.134913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.134922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.134967 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.134982 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.237824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.237860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.237871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.237886 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.237897 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.341051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.341105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.341120 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.341142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.341158 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.402601 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.403097 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:08:31.403064156 +0000 UTC m=+84.026442737 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.444283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.444338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.444352 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.444375 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
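[editor's note] The UnmountVolume.TearDown failure above says the volume cannot be torn down because kubevirt.io.hostpath-provisioner is not in the kubelet's list of registered CSI drivers. A driver gets onto that list by writing a registration socket under the kubelet's plugin registry, so a quick cross-check from the node is to list those sockets. A minimal Go sketch follows; the directory is the stock kubelet default (--root-dir=/var/lib/kubelet), an assumption, not something stated in this log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Kubelet default plugin-registration directory; adjust for a non-default --root-dir.
	const regDir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(regDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read plugin registry:", err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Println("no registration sockets: no CSI driver can register with this kubelet")
	}
	for _, e := range entries {
		// A healthy hostpath provisioner would appear here as a socket named
		// something like kubevirt.io.hostpath-provisioner-reg.sock (name illustrative).
		fmt.Println(filepath.Join(regDir, e.Name()))
	}
}

If the socket is absent, the driver pod itself is not running yet, which is consistent with the node-not-ready state recorded around this failure; the retry in 32s would then be expected to succeed once the driver comes up.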
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.444394 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.504168 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.504238 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.504269 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.504302 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504495 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504491 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504596 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:08:31.504573776 +0000 UTC m=+84.127952347 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504497 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504819 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504846 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504514 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504885 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504916 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:08:31.504897106 +0000 UTC m=+84.128275647 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.504935 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:08:31.504925386 +0000 UTC m=+84.128304027 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.505004 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.505044 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:08:31.50503511 +0000 UTC m=+84.128413771 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.547021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.547103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.547122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.547148 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.547165 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.569312 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:02:16.195760499 +0000 UTC
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.581731 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.581726 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
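[editor's note] Every "Failed to update status for pod" entry in this log fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23. A minimal Go sketch, assuming the endpoint is reachable from the node, that prints the presented certificate's validity window; verification is skipped deliberately because the whole point is to inspect a certificate that no longer verifies:

package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// Webhook endpoint taken from the error messages in this log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(1)
	}
	defer conn.Close()
	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.String(),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

Note the certificate_manager.go lines nearby concern the kubelet's own serving certificate, which is still valid until 2026-02-24; only the webhook's certificate is stale here.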
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.581754 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.582211 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.582233 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:07:59 crc kubenswrapper[4684]: E0123 09:07:59.582251 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.650018 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.650048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.650057 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.650071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.650083 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.752216 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.752264 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.752278 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.752299 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.752315 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.854245 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.854333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.854349 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.854370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.854388 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.957071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.957114 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.957126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.957141 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:07:59 crc kubenswrapper[4684]: I0123 09:07:59.957152 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:07:59Z","lastTransitionTime":"2026-01-23T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.059053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.059088 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.059096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.059108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.059117 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.161013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.161247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.161353 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.161497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.161594 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.264355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.264401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.264414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.264431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.264445 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.366640 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.366860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.366980 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.367104 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.367194 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.469061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.469086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.469093 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.469106 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.469115 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.569503 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:18:27.115751133 +0000 UTC
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.571124 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.571162 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.571175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.571191 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.571204 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.581286 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:00 crc kubenswrapper[4684]: E0123 09:08:00.581419 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.673216 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.673257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.673265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.673277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
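[editor's note] The NodeNotReady churn above all reduces to one condition: no CNI configuration file in /etc/kubernetes/cni/net.d/, which stays empty until the ovnkube-node pod (crash-looping just below) writes it. A minimal Go sketch of the same existence check the message implies; the directory comes straight from the log, while the glob pattern is an assumption covering the usual .conf/.conflist names:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message.
	const cniDir = "/etc/kubernetes/cni/net.d"
	matches, err := filepath.Glob(filepath.Join(cniDir, "*.conf*"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(matches) == 0 {
		fmt.Println("no CNI configuration present; the network plugin has not written its config yet")
		return
	}
	for _, m := range matches {
		fmt.Println("found:", m)
	}
}

Until a file appears there, the kubelet keeps re-recording the same five node events on every status sync, roughly every 100ms, which is exactly the repetition visible above and below.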
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.673289 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.776111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.776171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.776187 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.776202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.776211 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.878011 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.878248 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.878307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.878369 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.878423 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.923183 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/2.log"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.924059 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/1.log"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.930386 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195" exitCode=1
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.930443 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195"}
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.930495 4684 scope.go:117] "RemoveContainer" containerID="0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.933299 4684 scope.go:117] "RemoveContainer" containerID="96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195"
Jan 23 09:08:00 crc kubenswrapper[4684]: E0123 09:08:00.933473 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41"
Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.945798 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.960777 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.973863 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.981014 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.981261 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.981354 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.981439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.981518 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:00Z","lastTransitionTime":"2026-01-23T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:00 crc kubenswrapper[4684]: I0123 09:08:00.993755 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae
9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node
-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.004910 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 
09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.017185 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.027779 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.040175 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.053767 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.066233 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.077460 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.083626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.083668 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.083680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.083717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.083730 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.087614 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.097151 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.108016 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.119832 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.131103 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.186621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.186718 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.186732 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.186751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.186763 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.289032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.289076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.289094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.289112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.289126 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.299341 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.299384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.299393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.299409 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.299421 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.311983 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.316005 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.316089 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.316106 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.316132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.316149 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.329816 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.333384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.333413 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
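Each failed webhook call carries the same two timestamps, so the certificate's staleness can be read straight off the error text. A quick check of that arithmetic, with both values copied verbatim from the x509 error above:

from datetime import datetime, timezone

# Both values appear verbatim in the x509 error above.
now = datetime(2026, 1, 23, 9, 8, 1, tzinfo=timezone.utc)           # "current time"
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # certificate notAfter

print(now - not_after)  # 151 days, 15:46:20

About five months past notAfter, which is why every Post to https://127.0.0.1:9743 fails verification regardless of payload.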
event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.333423 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.333436 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.333447 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.348416 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.353372 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.353421 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
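The verification failure is reproducible outside the kubelet with a plain TLS handshake against the webhook endpoint named in the error. A standard-library sketch, assuming it runs on the node itself; without the webhook's CA bundle loaded, the handshake may fail on an unknown issuer before the expiry is even reported:

import socket
import ssl

# Endpoint taken from the failed Post in the errors above.
HOST, PORT = "127.0.0.1", 9743

ctx = ssl.create_default_context()
# Assumption: load the CA that actually signed the webhook cert so that the
# expiry, not an unknown issuer, is what trips verification. This path is
# hypothetical:
# ctx.load_verify_locations("/path/to/webhook-ca.crt")

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            print("handshake OK -- certificate currently verifies")
except ssl.SSLCertVerificationError as err:
    # With the right CA loaded this prints 'certificate has expired',
    # matching the kubelet error above.
    print("verification failed:", err.verify_message)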
event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.353435 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.353455 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.353468 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.368437 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.373370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.373432 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.373446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.373463 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.373475 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.387288 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.387466 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.395143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.395223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.395247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.395269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.395284 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.497176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.497215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.497226 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.497240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.497249 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.570176 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:16:05.834613359 +0000 UTC Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.581520 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.581544 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.581667 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.581808 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.582122 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:01 crc kubenswrapper[4684]: E0123 09:08:01.582179 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.599030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.599076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.599090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.599108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.599122 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.700714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.700737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.700744 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.700757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.700766 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.803377 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.803428 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.803441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.803456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.803468 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.906460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.906888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.906978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.907052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.907111 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:01Z","lastTransitionTime":"2026-01-23T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:01 crc kubenswrapper[4684]: I0123 09:08:01.938516 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/2.log" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.009519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.009592 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.009608 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.009632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.009648 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.112272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.112304 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.112314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.112329 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.112339 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.214236 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.214503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.214566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.214641 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.214727 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.317526 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.317573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.317591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.317612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.317628 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.419720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.419769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.419781 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.419803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.419813 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.522215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.522425 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.522516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.522601 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.522693 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.571020 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:17:26.5288164 +0000 UTC Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.581342 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:02 crc kubenswrapper[4684]: E0123 09:08:02.581470 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.625028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.625060 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.625069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.625082 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.625093 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.727100 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.727138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.727150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.727188 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.727199 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.829103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.829136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.829147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.829162 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.829171 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.931142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.931199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.931213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.931228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:02 crc kubenswrapper[4684]: I0123 09:08:02.931241 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:02Z","lastTransitionTime":"2026-01-23T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.033940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.034189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.034349 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.034449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.034557 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.137636 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.137673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.137682 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.137710 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.137722 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.239751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.239788 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.239798 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.239810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.239819 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.342024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.342089 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.342101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.342116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.342136 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.438415 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.444503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.444538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.444547 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.444561 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.444573 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.451566 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.457582 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.473209 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:
07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.487537 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@
sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.499335 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.510116 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.519828 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.531149 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.544626 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.547175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.547499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.547724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.547921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.548044 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.556332 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.566723 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.571778 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:22:05.168646491 +0000 UTC Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.579593 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.581540 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:03 crc kubenswrapper[4684]: E0123 09:08:03.582264 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.581610 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.581591 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:03 crc kubenswrapper[4684]: E0123 09:08:03.582900 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:03 crc kubenswrapper[4684]: E0123 09:08:03.583013 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.590074 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.606303 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae
9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node
-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.615422 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 
09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.625848 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.638510 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.650728 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.650765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.650776 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.650791 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:03 crc kubenswrapper[4684]: I0123 09:08:03.650801 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:03Z","lastTransitionTime":"2026-01-23T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[Repetition condensed: the five-entry cycle above — four kubelet_node_status.go:724 "Recording event message for node" entries (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) followed by one setters.go:603 "Node became not ready" entry carrying the same KubeletNotReady condition ("container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?") — recurs at roughly 100 ms intervals from Jan 23 09:08:03.752 through Jan 23 09:08:07.555. The repeats are elided; the non-repeating entries interleaved among them follow.]
Jan 23 09:08:04 crc kubenswrapper[4684]: I0123 09:08:04.572794 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:27:29.902192858 +0000 UTC
Jan 23 09:08:04 crc kubenswrapper[4684]: I0123 09:08:04.581167 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:04 crc kubenswrapper[4684]: E0123 09:08:04.581298 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:05 crc kubenswrapper[4684]: I0123 09:08:05.573389 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:26:12.56768281 +0000 UTC
Jan 23 09:08:05 crc kubenswrapper[4684]: I0123 09:08:05.581768 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:05 crc kubenswrapper[4684]: I0123 09:08:05.581862 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:05 crc kubenswrapper[4684]: I0123 09:08:05.581901 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:05 crc kubenswrapper[4684]: E0123 09:08:05.582009 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:05 crc kubenswrapper[4684]: E0123 09:08:05.582134 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:05 crc kubenswrapper[4684]: E0123 09:08:05.582233 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:06 crc kubenswrapper[4684]: I0123 09:08:06.573731 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:27:35.289231713 +0000 UTC
Jan 23 09:08:06 crc kubenswrapper[4684]: I0123 09:08:06.580937 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:06 crc kubenswrapper[4684]: E0123 09:08:06.581063 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.574865 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:38:44.084798719 +0000 UTC
Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.581434 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.581454 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:07 crc kubenswrapper[4684]: E0123 09:08:07.581596 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.581659 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:07 crc kubenswrapper[4684]: E0123 09:08:07.581743 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:07 crc kubenswrapper[4684]: E0123 09:08:07.581858 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.595084 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-
cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] 
waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.607112 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.619748 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.634634 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.646305 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.658396 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.659981 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.660013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.660026 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.660040 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.660051 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:07Z","lastTransitionTime":"2026-01-23T09:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.674242 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.683857 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.692545 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.702169 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.714727 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.724276 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.732796 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.742752 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.753327 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.762603 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.762638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.762646 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.762659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.762667 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:07Z","lastTransitionTime":"2026-01-23T09:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.784460 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7c4719b2eaaa5e4439e33009fbfab815e8ac21cf72b90aeaeeb1b6717029de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"ce openshift-marketplace/community-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0123 09:07:40.472083 6043 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"d389393c-7ba9-422c-b3f5-06e391d537d2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.803846 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:07Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.864464 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.864561 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.864571 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.864583 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.864608 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:07Z","lastTransitionTime":"2026-01-23T09:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.967734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.967796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.967810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.967832 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:07 crc kubenswrapper[4684]: I0123 09:08:07.967846 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:07Z","lastTransitionTime":"2026-01-23T09:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.070870 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.070916 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.070926 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.070944 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.071208 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.173692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.173806 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.173827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.173859 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.173879 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.276111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.276166 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.276182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.276206 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.276223 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.379290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.379340 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.379351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.379366 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.379375 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.482942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.482987 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.482999 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.483016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.483025 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.575221 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:11:32.771329825 +0000 UTC Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.581554 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:08 crc kubenswrapper[4684]: E0123 09:08:08.581748 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.586370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.586403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.586434 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.586452 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.586461 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.689999 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.690034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.690043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.690074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.690083 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.791783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.791829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.791842 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.791857 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.791869 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.894612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.894662 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.894677 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.894714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.894728 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.997609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.997676 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.997691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.997755 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:08 crc kubenswrapper[4684]: I0123 09:08:08.997767 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:08Z","lastTransitionTime":"2026-01-23T09:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.105519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.105598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.105624 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.105657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.105679 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.208637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.208687 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.208722 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.208741 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.208753 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.311109 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.311150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.311165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.311186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.311200 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.413297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.413337 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.413352 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.413374 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.413388 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.516976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.517013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.517024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.517040 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.517052 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.575991 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:25:11.601864144 +0000 UTC Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.583448 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:09 crc kubenswrapper[4684]: E0123 09:08:09.583548 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.583720 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:09 crc kubenswrapper[4684]: E0123 09:08:09.583762 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.583861 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:09 crc kubenswrapper[4684]: E0123 09:08:09.583901 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.620061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.620100 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.620116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.620139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.620156 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.722582 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.722617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.722626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.722641 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.722650 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.825310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.826011 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.826086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.826196 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.826322 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.929232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.929743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.929860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.929980 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:09 crc kubenswrapper[4684]: I0123 09:08:09.930076 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:09Z","lastTransitionTime":"2026-01-23T09:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.032545 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.032580 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.032590 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.032605 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.032616 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.134629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.134667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.134679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.134693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.134724 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.237654 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.237688 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.237720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.237735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.237745 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.339774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.339813 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.339824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.339842 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.339854 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.442361 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.442403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.442415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.442432 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.442443 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.545008 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.545053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.545063 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.545075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.545085 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.576812 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:30:12.211818349 +0000 UTC Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.581105 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:10 crc kubenswrapper[4684]: E0123 09:08:10.581233 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.647610 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.647648 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.647659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.647673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.647682 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.750080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.750150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.750162 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.750178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.750188 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.852936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.852992 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.853006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.853026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.853037 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.955158 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.955197 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.955208 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.955222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:10 crc kubenswrapper[4684]: I0123 09:08:10.955234 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:10Z","lastTransitionTime":"2026-01-23T09:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.057621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.057663 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.057674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.057692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.057717 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.159960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.160005 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.160019 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.160034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.160044 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.261950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.261995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.262003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.262016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.262026 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.364988 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.365024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.365032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.365045 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.365055 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.467864 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.467893 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.467901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.467915 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.467924 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.570019 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.570045 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.570053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.570066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.570074 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.577451 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:24:14.864651359 +0000 UTC Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.581770 4684 scope.go:117] "RemoveContainer" containerID="96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.581850 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.581945 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.581966 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.582017 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.582059 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.582107 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.582177 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.593898 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.594663 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.606921 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.623993 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae
9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.634144 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.647523 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.658314 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.670062 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.671888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.672001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.672074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.672150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.672226 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.677858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.677879 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.677887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.677898 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.677906 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.685667 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.691478 4684 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba5
68bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.695058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.695085 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.695094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.695107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.695119 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.698842 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.707071 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.710147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.710170 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.710178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.710189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.710198 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.712015 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.721776 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.722813 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.724922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.725014 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.725083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.725231 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.725295 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.734320 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.736852 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.741843 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.741880 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.741893 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.741908 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.741920 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.746333 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.756552 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: E0123 09:08:11.756727 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.758969 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.774240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.774523 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.774614 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.774714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.774807 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.776104 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.788445 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.800825 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:11Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.877950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.877984 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 
09:08:11.877993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.878006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.878016 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.981104 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.981149 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.981163 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.981186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:11 crc kubenswrapper[4684]: I0123 09:08:11.981199 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:11Z","lastTransitionTime":"2026-01-23T09:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.084308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.084342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.084351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.084364 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.084372 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.186499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.186534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.186543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.186559 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.186569 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.288739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.288775 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.288785 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.288801 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.288811 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.391306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.391347 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.391359 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.391373 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.391383 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
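The block above (and the identical entries that follow) is one message repeated on a heartbeat: the kubelet keeps the node NotReady because the CRI runtime reports NetworkReady=false, and the runtime in turn is waiting for a network configuration to appear in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that directory probe, assuming the upstream libcni naming convention (.conf, .conflist, .json), which this log does not itself spell out:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	found := false
	for _, e := range entries {
		// libcni convention (assumption): network configs end in
		// .conf, .conflist, or .json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// The state this log is stuck in: NetworkReady=false,
		// so the Ready condition stays False.
		fmt.Println("no CNI configuration file; network plugin not ready")
	}
}

Until the network provider (plausibly blocked here by the same expired-certificate problem seen in the webhook errors) writes a file into that directory, every sync iteration re-emits the five lines above.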
Has your network provider started?"} Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.494209 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.494326 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.494344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.494373 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.494395 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.578479 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:18:37.863459679 +0000 UTC Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.582045 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:12 crc kubenswrapper[4684]: E0123 09:08:12.582300 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.597955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.598017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.598035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.598062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.598082 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.699841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.699873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.699882 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.699896 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.699906 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.802827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.802864 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.802875 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.802887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.802896 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.905118 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.905175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.905183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.905197 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:12 crc kubenswrapper[4684]: I0123 09:08:12.905206 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:12Z","lastTransitionTime":"2026-01-23T09:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.007230 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.007268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.007280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.007296 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.007306 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.109224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.109261 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.109272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.109289 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.109299 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.212557 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.212592 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.212604 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.212620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.212633 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.315388 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.315441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.315453 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.315468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.315477 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.417546 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.417595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.417605 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.417619 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.417630 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.519571 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.519606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.519617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.519632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.519644 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.579063 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:25:10.943490019 +0000 UTC Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.581727 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.581775 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:13 crc kubenswrapper[4684]: E0123 09:08:13.581950 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:13 crc kubenswrapper[4684]: E0123 09:08:13.582055 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.581804 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:13 crc kubenswrapper[4684]: E0123 09:08:13.582276 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.621302 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.621546 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.621644 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.621757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.621847 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.724027 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.724061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.724071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.724086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.724096 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.826471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.826516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.826527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.826539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.826549 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.928759 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.928791 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.928799 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.928812 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:13 crc kubenswrapper[4684]: I0123 09:08:13.928822 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:13Z","lastTransitionTime":"2026-01-23T09:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.030803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.030855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.030868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.030886 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.030897 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.114071 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:14 crc kubenswrapper[4684]: E0123 09:08:14.114206 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 09:08:14 crc kubenswrapper[4684]: E0123 09:08:14.114282 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:08:46.114264265 +0000 UTC m=+98.737642806 (durationBeforeRetry 32s). 
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.133406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.133447 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.133465 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.133485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.133498 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.236370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.236414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.236424 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.236440 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.236451 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.338330 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.338622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.338720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.338809 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.338882 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.441691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.441741 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.441750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.441769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.441780 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.544262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.544297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.544305 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.544321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.544329 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.579592 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:33:26.376683837 +0000 UTC
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.581887 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:14 crc kubenswrapper[4684]: E0123 09:08:14.582122 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.646763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.646811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.646822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.646839 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.647237 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.748995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.749017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.749026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.749037 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.749045 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.851439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.851475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.851485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.851501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.851511 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.953428 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.953466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.953475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.953489 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:14 crc kubenswrapper[4684]: I0123 09:08:14.953498 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:14Z","lastTransitionTime":"2026-01-23T09:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.056623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.056864 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.056930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.056998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.057059 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.158847 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.158942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.158950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.158963 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.158971 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.261442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.261658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.261751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.261829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.261923 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.364720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.365003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.365091 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.365177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.365261 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.467902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.467940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.467949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.467964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.467973 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.570410 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.570447 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.570460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.570479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.570493 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.579859 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:41:04.158597065 +0000 UTC Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.583633 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:15 crc kubenswrapper[4684]: E0123 09:08:15.583782 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.583973 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:15 crc kubenswrapper[4684]: E0123 09:08:15.584037 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.584169 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:15 crc kubenswrapper[4684]: E0123 09:08:15.584233 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.672906 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.672941 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.672953 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.672968 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.672978 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.775422 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.775443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.775451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.775463 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.775471 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.877870 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.877910 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.877919 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.877934 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.877944 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.980439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.980479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.980488 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.980501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.980513 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:15Z","lastTransitionTime":"2026-01-23T09:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.981764 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/0.log" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.981806 4684 generic.go:334] "Generic (PLEG): container finished" podID="ab0885cc-d621-4e36-9e37-1326848bd147" containerID="d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe" exitCode=1 Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.981832 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerDied","Data":"d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe"} Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.982201 4684 scope.go:117] "RemoveContainer" containerID="d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe" Jan 23 09:08:15 crc kubenswrapper[4684]: I0123 09:08:15.995740 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:15Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.008052 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.019472 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.029919 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.040619 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.055575 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.076543 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae
9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.083235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.083259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.083269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.083282 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.083293 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.087284 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.099066 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.109691 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.120723 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.132574 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.146062 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ddd
cfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.159887 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.173276 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.186525 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.186566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.186578 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.186595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.186611 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.187615 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.197614 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.207862 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:16Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.289766 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.289828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.289839 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.289854 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.289864 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.392945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.392990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.393002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.393017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.393027 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.494929 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.494966 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.494976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.494993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.495004 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.580749 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:20:22.813160493 +0000 UTC Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.580945 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:16 crc kubenswrapper[4684]: E0123 09:08:16.581052 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.597898 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.597934 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.597946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.597961 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.597973 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.700038 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.700092 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.700104 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.700120 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.700130 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.802824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.802881 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.802894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.802910 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.802922 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.904997 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.905058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.905068 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.905107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.905123 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:16Z","lastTransitionTime":"2026-01-23T09:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.988335 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/0.log" Jan 23 09:08:16 crc kubenswrapper[4684]: I0123 09:08:16.988423 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerStarted","Data":"7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3"} Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.002613 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.007180 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.007236 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.007249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.007263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.007273 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.016966 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.029208 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.040719 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.052417 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 
09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.064035 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.075936 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.094413 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae
9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.110174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.110211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.110242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.110260 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.110271 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.110738 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.124361 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.134999 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.148925 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.161091 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.172184 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.183466 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.196744 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.208280 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.212070 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.212122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.212134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.212152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.212163 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.219895 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.314926 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.315213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.315300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.315384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.315464 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.417925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.418171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.418251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.418348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.418417 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.520803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.520849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.520860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.520873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.520883 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.580953 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:08:09.64724495 +0000 UTC
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.581770 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.581785 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.581828 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:17 crc kubenswrapper[4684]: E0123 09:08:17.581861 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:17 crc kubenswrapper[4684]: E0123 09:08:17.581944 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:17 crc kubenswrapper[4684]: E0123 09:08:17.582005 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.592401 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.601941 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.611744 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.623263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.623298 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.623309 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.623334 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.623347 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.624286 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.640230 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.650976 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.663820 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.674853 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.687645 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.698798 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.716481 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.725384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.725430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.725451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.725466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.725475 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.727482 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.739722 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.751048 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.763136 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.776356 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.788011 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.796862 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:17Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.827636 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.827676 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.827685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.827720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.827731 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.929834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.929882 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.929896 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.930108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:17 crc kubenswrapper[4684]: I0123 09:08:17.930123 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:17Z","lastTransitionTime":"2026-01-23T09:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.032443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.032485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.032497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.032512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.032524 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.135100 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.135146 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.135155 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.135173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.135189 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.237241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.237273 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.237283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.237297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.237307 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.340003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.340051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.340062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.340077 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.340087 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.442409 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.442444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.442457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.442474 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.442485 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.545147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.545182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.545190 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.545206 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.545217 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.581035 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:18 crc kubenswrapper[4684]: E0123 09:08:18.581206 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.581317 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:59:20.983901266 +0000 UTC Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.647554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.647603 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.647612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.647628 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.647637 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.750116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.750143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.750151 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.750163 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.750172 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.852403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.852439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.852451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.852470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.852482 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.955192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.955219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.955229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.955240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:18 crc kubenswrapper[4684]: I0123 09:08:18.955249 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:18Z","lastTransitionTime":"2026-01-23T09:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.057674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.057720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.057729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.057743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.057753 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.160507 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.160538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.160550 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.160567 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.160577 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.262340 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.262397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.262407 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.262421 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.262431 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.364495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.364519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.364528 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.364541 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.364549 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.466336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.466366 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.466374 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.466387 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.466406 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.568044 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.568263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.568442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.568603 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.568817 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.581442 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:14:39.54271875 +0000 UTC
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.581865 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.581904 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:19 crc kubenswrapper[4684]: E0123 09:08:19.582140 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:19 crc kubenswrapper[4684]: E0123 09:08:19.582247 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.581966 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:19 crc kubenswrapper[4684]: E0123 09:08:19.582550 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.671441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.671483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.671495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.671512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.671523 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.773356 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.773383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.773392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.773405 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.773414 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.875827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.875873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.875884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.875902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.875915 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.978480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.978733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.978752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.978767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:19 crc kubenswrapper[4684]: I0123 09:08:19.978778 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:19Z","lastTransitionTime":"2026-01-23T09:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.081715 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.081760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.081769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.081786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.081795 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.183946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.183978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.183988 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.184001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.184011 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.286351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.286396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.286406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.286422 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.286433 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.389639 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.389723 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.389737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.389765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.389782 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.491537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.491784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.491903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.492006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.492079 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.581123 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:20 crc kubenswrapper[4684]: E0123 09:08:20.581283 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.582083 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:47:36.416765263 +0000 UTC
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.594628 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.594915 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.595016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.595102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.595176 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.697772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.697813 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.697822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.697836 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.697845 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.800192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.800229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.800241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.800259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.800271 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.903196 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.903250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.903260 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.903283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:20 crc kubenswrapper[4684]: I0123 09:08:20.903300 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:20Z","lastTransitionTime":"2026-01-23T09:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.005097 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.005427 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.005522 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.005596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.005661 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.107775 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.107819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.107830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.107846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.107857 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.210170 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.210213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.210226 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.210246 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.210258 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.312326 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.312639 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.312745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.312817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.312904 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.414615 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.414869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.414985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.415072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.415155 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.518364 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.518393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.518402 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.518415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.518426 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.581782 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:21 crc kubenswrapper[4684]: E0123 09:08:21.581896 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.582074 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:21 crc kubenswrapper[4684]: E0123 09:08:21.582135 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.582289 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:21 crc kubenswrapper[4684]: E0123 09:08:21.582390 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.582510 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:52:07.685253831 +0000 UTC
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.620213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.620246 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.620255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.620267 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.620276 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.722780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.722814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.722822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.722835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.722844 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.825186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.825224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.825239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.825254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.825266 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.928185 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.928226 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.928237 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.928254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:21 crc kubenswrapper[4684]: I0123 09:08:21.928264 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:21Z","lastTransitionTime":"2026-01-23T09:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.031096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.031134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.031144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.031159 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.031168 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.133874 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.133907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.133917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.133932 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.133943 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.157892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.157925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.157934 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.157949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.157961 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.172559 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:22Z is after 2025-08-24T17:21:41Z"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.176102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.176142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.176155 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.176171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.176182 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.188680 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:22Z is after 2025-08-24T17:21:41Z"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.192336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.192369 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.192378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.192392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.192400 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.204764 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:22Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.207950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.207988 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.207999 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.208016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.208026 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.219554 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:22Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.223032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.223058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.223066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.223078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.223086 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.233829 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:22Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.233960 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.236126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.236161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.236172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.236189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.236206 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.339061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.339112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.339123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.339139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.339151 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.441802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.441843 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.441854 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.441869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.441881 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.544376 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.544413 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.544429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.544444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.544454 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.581272 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:22 crc kubenswrapper[4684]: E0123 09:08:22.581607 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.583395 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:02:32.62904661 +0000 UTC Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.646607 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.646646 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.646654 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.646669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.646678 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.748892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.748924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.748932 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.748947 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.748957 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.851262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.851298 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.851307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.851322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.851332 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.953424 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.953459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.953471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.953485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:22 crc kubenswrapper[4684]: I0123 09:08:22.953496 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:22Z","lastTransitionTime":"2026-01-23T09:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.056224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.056263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.056272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.056290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.056301 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.158426 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.158462 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.158470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.158484 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.158493 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.261268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.261303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.261314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.261329 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.261339 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.364134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.364176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.364190 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.364205 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.364220 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.466586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.466625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.466637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.466655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.466667 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.569418 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.569481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.569497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.569516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.569531 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.581118 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:23 crc kubenswrapper[4684]: E0123 09:08:23.581250 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.581450 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:23 crc kubenswrapper[4684]: E0123 09:08:23.581511 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.581792 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:23 crc kubenswrapper[4684]: E0123 09:08:23.582249 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.582543 4684 scope.go:117] "RemoveContainer" containerID="96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.583579 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:21:12.243583325 +0000 UTC Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.672387 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.672447 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.672461 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.672479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.672492 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.774609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.774667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.774676 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.774691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.774717 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.877137 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.877166 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.877178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.877201 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.877213 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.979151 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.979504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.979570 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.979645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:23 crc kubenswrapper[4684]: I0123 09:08:23.979748 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:23Z","lastTransitionTime":"2026-01-23T09:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.081942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.081990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.082002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.082020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.082033 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.185118 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.185152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.185161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.185175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.185185 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.287768 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.287803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.287813 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.287829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.287839 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.390437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.390488 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.390498 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.390512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.390522 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.492015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.492066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.492082 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.492102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.492116 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.581131 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:24 crc kubenswrapper[4684]: E0123 09:08:24.581267 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.584484 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:12:11.922087 +0000 UTC Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.594625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.594670 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.594683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.594726 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.594740 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.696773 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.696804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.696814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.696827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.696838 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.799863 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.799908 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.799924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.799948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.799965 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.902341 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.902380 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.902391 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.902406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:24 crc kubenswrapper[4684]: I0123 09:08:24.902417 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:24Z","lastTransitionTime":"2026-01-23T09:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.004581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.004614 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.004623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.004635 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.004644 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.107071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.107132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.107143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.107164 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.107176 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.209470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.209512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.209525 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.209539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.209549 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.310951 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.310993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.311004 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.311020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.311032 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.413177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.413208 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.413217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.413233 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.413242 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.515490 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.515521 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.515529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.515543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.515553 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.581540 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.581592 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:25 crc kubenswrapper[4684]: E0123 09:08:25.581647 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:25 crc kubenswrapper[4684]: E0123 09:08:25.581788 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.581553 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:25 crc kubenswrapper[4684]: E0123 09:08:25.581865 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.584796 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:42:10.685261501 +0000 UTC Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.617671 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.617725 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.617735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.617748 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.617756 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.720022 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.720076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.720087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.720103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.720114 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.822494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.822551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.822565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.822586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.822603 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.925968 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.926021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.926034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.926052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:25 crc kubenswrapper[4684]: I0123 09:08:25.926066 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:25Z","lastTransitionTime":"2026-01-23T09:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.014337 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/2.log" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.016920 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.029111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.029310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.029341 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.029372 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.029392 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.131551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.131588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.131596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.131609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.131618 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.234224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.234260 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.234272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.234288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.234299 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.336283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.336345 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.336357 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.336401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.336414 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.439219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.439269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.439308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.439342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.439352 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.542094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.542138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.542147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.542161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.542172 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.581248 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:26 crc kubenswrapper[4684]: E0123 09:08:26.581473 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.585730 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:10:41.768260138 +0000 UTC Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.645171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.645210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.645219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.645234 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.645243 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.747049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.747081 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.747090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.747103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.747111 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.849799 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.849830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.849841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.849856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.849868 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.952191 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.952251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.952263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.952280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:26 crc kubenswrapper[4684]: I0123 09:08:26.952293 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:26Z","lastTransitionTime":"2026-01-23T09:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.021043 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.022139 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/2.log" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.024143 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7" exitCode=1 Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.024180 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.024232 4684 scope.go:117] "RemoveContainer" containerID="96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.024997 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7" Jan 23 09:08:27 crc kubenswrapper[4684]: E0123 09:08:27.025229 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.039134 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.050459 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.054729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.054760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.054768 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.054780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.054790 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.064463 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.077808 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.093980 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.105768 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.118073 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.130027 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.139965 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.150045 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.157183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.157209 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.157216 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.157232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.157242 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.162748 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.176495 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.190301 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.200364 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.213788 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.227893 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.246832 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e
256640b9bebec20436b73eb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:26Z\\\",\\\"message\\\":\\\"10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:26Z is after 2025-08-24T17:21:41Z\\\\nI0123 09:08:26.262216 6629 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0123 09:08:26.262245 6629 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-wrrtl]\\\\nI0123 09:08:26.262257 6629 factory.go:656] Stopping watch factory\\\\nI0123 09:08:26.262259 6629 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 09:08:26.262281 6629 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:08:26.262282 6629 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-wrrtl before timer (time: 2026-01-23 09:08:26.737373558 +0000 UTC m=+1.601472596): skip\\\\nI0123 09:08:26.262299 6629 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 56.172µs)\\\\nI0123 
09:08:26.262311 6629 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:08:26.262365 6629 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0123 09:08:26.262378 6629 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.259338 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 
09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.259564 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.259596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.259612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.259627 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.259637 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.361562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.361600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.361610 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.361623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.361633 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.463445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.463485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.463495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.463509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.463520 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.565612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.565657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.565666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.565680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.565689 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.581758 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:27 crc kubenswrapper[4684]: E0123 09:08:27.581932 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.582087 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.582104 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:27 crc kubenswrapper[4684]: E0123 09:08:27.582222 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:27 crc kubenswrapper[4684]: E0123 09:08:27.582308 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.586081 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:15:21.19790113 +0000 UTC Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.596540 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.609923 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.622629 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.634231 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.645308 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.657513 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.668202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.668231 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.668239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.668251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.668261 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.673259 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.684408 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.695665 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.708109 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.720918 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.741484 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e
256640b9bebec20436b73eb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a86556114a977603ae87310370eefd3122daae9dcb97c57a715eab43e8c195\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:00Z\\\",\\\"message\\\":\\\":services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0123 09:07:58.859000 6248 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:07:58.859024 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:07:58.859031 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0123 09:07:58.859119 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:26Z\\\",\\\"message\\\":\\\"10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:26Z is after 2025-08-24T17:21:41Z\\\\nI0123 09:08:26.262216 6629 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0123 09:08:26.262245 6629 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-wrrtl]\\\\nI0123 09:08:26.262257 6629 factory.go:656] Stopping watch factory\\\\nI0123 09:08:26.262259 6629 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 09:08:26.262281 6629 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:08:26.262282 6629 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-wrrtl before timer (time: 2026-01-23 09:08:26.737373558 +0000 UTC m=+1.601472596): skip\\\\nI0123 09:08:26.262299 6629 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 56.172µs)\\\\nI0123 
09:08:26.262311 6629 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:08:26.262365 6629 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0123 09:08:26.262378 6629 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:08:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.762759 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 
09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.772255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.772348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.772386 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.772414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.772427 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.782424 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] 
Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.801315 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.818630 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.834566 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.855043 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:27Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.875616 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.875644 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc 
kubenswrapper[4684]: I0123 09:08:27.875655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.875670 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.875687 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.978435 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.978496 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.978509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.978525 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:27 crc kubenswrapper[4684]: I0123 09:08:27.978552 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:27Z","lastTransitionTime":"2026-01-23T09:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.028608 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.032990 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7" Jan 23 09:08:28 crc kubenswrapper[4684]: E0123 09:08:28.033218 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.044470 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.057037 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.077905 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e
256640b9bebec20436b73eb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:26Z\\\",\\\"message\\\":\\\"10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:26Z is after 2025-08-24T17:21:41Z\\\\nI0123 09:08:26.262216 6629 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0123 09:08:26.262245 6629 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-wrrtl]\\\\nI0123 09:08:26.262257 6629 factory.go:656] Stopping watch factory\\\\nI0123 09:08:26.262259 6629 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 09:08:26.262281 6629 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:08:26.262282 6629 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-wrrtl before timer (time: 2026-01-23 09:08:26.737373558 +0000 UTC m=+1.601472596): skip\\\\nI0123 09:08:26.262299 6629 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 56.172µs)\\\\nI0123 09:08:26.262311 6629 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:08:26.262365 6629 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0123 09:08:26.262378 6629 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:08:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.080334 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.080374 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.080387 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.080405 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.080417 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.091633 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.105032 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.117055 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.129545 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.146681 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.167544 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.183322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.183365 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc 
kubenswrapper[4684]: I0123 09:08:28.183376 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.183393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.183405 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.183367 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.196609 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.208513 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.219007 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.232030 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.246227 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.262347 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.273527 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.284911 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.284955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.284966 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.284980 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.284994 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.286246 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:28Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.386690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.386750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.386760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.386773 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.386782 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.488990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.489266 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.489358 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.489449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.489532 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.582562 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:28 crc kubenswrapper[4684]: E0123 09:08:28.583366 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.587000 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:18:20.242095605 +0000 UTC Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.592143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.592184 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.592195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.592213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.592225 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.694453 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.694498 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.694516 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.694539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.694556 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.796913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.796993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.797016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.797043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.797066 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.899854 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.899895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.899907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.899924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:28 crc kubenswrapper[4684]: I0123 09:08:28.899935 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:28Z","lastTransitionTime":"2026-01-23T09:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.002403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.002459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.002469 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.002515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.002526 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.104108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.104149 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.104158 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.104169 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.104178 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.206291 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.206324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.206332 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.206346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.206356 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.309571 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.309615 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.309626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.309642 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.309653 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.412108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.412189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.412217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.412235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.412247 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.514836 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.514897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.514914 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.514935 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.514953 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.581054 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.581078 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.581143 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:29 crc kubenswrapper[4684]: E0123 09:08:29.581195 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:29 crc kubenswrapper[4684]: E0123 09:08:29.581367 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:29 crc kubenswrapper[4684]: E0123 09:08:29.581462 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.588058 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:00:28.707257041 +0000 UTC Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.616727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.616760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.616769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.616789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.616816 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.718600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.718644 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.718655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.718669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.718679 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.821395 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.821448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.821463 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.821483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.821498 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.923995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.924032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.924044 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.924059 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:29 crc kubenswrapper[4684]: I0123 09:08:29.924071 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:29Z","lastTransitionTime":"2026-01-23T09:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.025976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.026020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.026030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.026047 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.026059 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.128635 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.128746 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.128759 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.128787 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.128800 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.234469 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.234631 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.234987 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.235076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.235100 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.337610 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.337739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.337751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.337768 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.337779 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.440321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.440575 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.440655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.440740 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.440808 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.543491 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.543836 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.543921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.544013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.544076 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.580992 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:30 crc kubenswrapper[4684]: E0123 09:08:30.581225 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.589158 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:16:01.544464936 +0000 UTC Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.647002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.647066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.647103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.647130 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.647147 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.750074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.750109 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.750118 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.750137 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.750146 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
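
The certificate_manager lines above report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every sync. A minimal sketch of how such a jittered deadline can be derived, assuming a uniform random point between 70% and 90% of the certificate's validity period (an assumption inferred from the spread of logged deadlines, not taken from the kubelet source); the not_before value is purely illustrative:

import random
from datetime import datetime, timedelta, timezone

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # Pick a uniform random point between 70% and 90% of the cert lifetime
    # (assumed jitter policy; consistent with the varying deadlines above).
    lifetime = (not_after - not_before).total_seconds()
    jitter = 0.7 + 0.2 * random.random()
    return not_before + timedelta(seconds=lifetime * jitter)

not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
not_before = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # illustrative
print(rotation_deadline(not_before, not_after))  # a new deadline on each call
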
Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.852281 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.852312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.852322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.852338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.852348 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.955030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.955069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.955079 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.955094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:30 crc kubenswrapper[4684]: I0123 09:08:30.955103 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:30Z","lastTransitionTime":"2026-01-23T09:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.056947 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.056985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.056998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.057037 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.057051 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.159042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.159094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.159107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.159125 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.159138 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.261762 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.261795 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.261807 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.261824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.261837 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.364225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.364259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.364267 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.364280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.364289 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.394352 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.395160 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7" Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.395319 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.466684 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.466993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.467068 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.467134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.467196 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.484577 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.484848 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.4848185 +0000 UTC m=+148.108197041 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.570668 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.571024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.571138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.571244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.571321 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.581229 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.581304 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.581337 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.581813 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.581896 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.581618 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
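
The nestedpendingoperations entry above defers the unmount retry to 09:09:35, 64 seconds out ("durationBeforeRetry 1m4s"). That interval is consistent with exponential backoff starting at 500ms and doubling on each consecutive failure, capped near two minutes; the constants in this sketch are assumptions inferred from the logged value, not taken from the kubelet source:

def duration_before_retry(consecutive_failures: int,
                          initial: float = 0.5,  # seconds (assumed)
                          factor: float = 2.0,   # (assumed)
                          cap: float = 122.0) -> float:
    # 0.5s * 2**7 == 64s, i.e. the "durationBeforeRetry 1m4s" seen above.
    return min(initial * factor ** consecutive_failures, cap)

print(duration_before_retry(7))  # 64.0
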
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.586131 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.586186 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.586218 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.586241 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586374 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586394 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586388 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586429 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586447 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586471 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586501 4684 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586518 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.58649173 +0000 UTC m=+148.209870271 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586542 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.586533562 +0000 UTC m=+148.209912153 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586406 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586564 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.586549812 +0000 UTC m=+148.209928453 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 09:08:31 crc kubenswrapper[4684]: E0123 09:08:31.586595 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.586571713 +0000 UTC m=+148.209950254 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.590205 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:58:35.127106291 +0000 UTC Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.675127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.675168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.675177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.675192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.675202 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.777518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.777554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.777562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.777578 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.777588 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
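
Each status sync in this log serializes the Ready condition as JSON after the literal "condition=". A short sketch for pulling the reason and message out of such lines when scanning a dump like this one; the parsing strategy (splitting on "condition=") is an assumption about the line layout, not a kubelet API:

import json

def ready_condition(line: str) -> dict | None:
    # Lines of interest look like: ... setters.go:603] "Node became not
    # ready" node="crc" condition={"type":"Ready",...}
    marker = "condition="
    idx = line.find(marker)
    if idx == -1:
        return None
    return json.loads(line[idx + len(marker):])

cond = ready_condition('... condition={"type":"Ready","status":"False",'
                       '"reason":"KubeletNotReady","message":"..."}')
print(cond["reason"], "-", cond["message"])
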
Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.879355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.879391 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.879401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.879415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.879425 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.981505 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.981542 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.981553 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.981567 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:31 crc kubenswrapper[4684]: I0123 09:08:31.981579 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:31Z","lastTransitionTime":"2026-01-23T09:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.084364 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.084412 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.084425 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.084442 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.084453 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.186774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.186818 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.186828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.186843 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.186853 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.288661 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.288692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.288728 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.288743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.288753 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.391451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.391492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.391537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.391551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.391562 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.494123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.494156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.494167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.494182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.494193 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.534195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.534232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.534241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.534255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.534264 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: E0123 09:08:32.546635 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.552342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.552385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
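
The status patch above is rejected because the node.network-node-identity.openshift.io webhook presents a serving certificate that expired on 2025-08-24T17:21:41Z, months before the node's clock reading of 2026-01-23. A minimal sketch of the same validity check using the third-party cryptography package (an illustration of the x509 failure, not how the kubelet itself verifies TLS):

from datetime import datetime, timezone
from cryptography import x509  # pip install cryptography

def cert_is_valid(pem_bytes: bytes, now: datetime | None = None) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    now = now or datetime.now(timezone.utc)
    # Timezone-aware validity accessors (cryptography >= 42).
    return cert.not_valid_before_utc <= now <= cert.not_valid_after_utc

Run against the webhook's serving certificate, this returns False for any "now" after 2025-08-24T17:21:41Z, reproducing the "certificate has expired or is not yet valid" error logged above.
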
event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.552398 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.552415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.552432 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: E0123 09:08:32.564781 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.567673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.567719 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.567729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.567781 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.567795 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: E0123 09:08:32.577932 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:32Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.580979 4684 util.go:30] "No sandbox for pod can be found. 
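[Editor's sketch.] The retried status patches above all fail for the single reason recorded in each error: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node's current time of 2026-01-23T09:08:32Z. A minimal Go sketch for confirming such an expiry from the node itself follows; the address and port are taken from the log, and verification is deliberately skipped so the expired certificate can still be inspected.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Dial the webhook endpoint reported in the log and print the
        // validity window of the certificate it presents. Verification is
        // skipped because an expired certificate would otherwise abort the
        // handshake before we could read its dates.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore)
        fmt.Println("notAfter: ", cert.NotAfter)
    }

Run against this endpoint it should report a notAfter of 2025-08-24T17:21:41Z, matching the x509 error in the log; rotating the webhook's serving certificate (or correcting the node clock, if the time itself is wrong) is what clears these patch failures.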
Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.580979 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:32 crc kubenswrapper[4684]: E0123 09:08:32.581107 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.581464 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.581504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.581515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.581531 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.581541 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.590898 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:06:29.773628704 +0000 UTC Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.594904 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.594930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.594941 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.594955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.594965 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: E0123 09:08:32.606299 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.607673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.607714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.607725 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.607739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.607749 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.710256 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.710290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.710308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.710323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.710333 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.812127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.812161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.812173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.812188 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.812198 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.914965 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.915006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.915016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.915030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:32 crc kubenswrapper[4684]: I0123 09:08:32.915039 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:32Z","lastTransitionTime":"2026-01-23T09:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.017255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.017296 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.017307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.017324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.017334 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.119298 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.119338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.119348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.119365 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.119379 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.222730 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.222784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.222796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.222814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.222826 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.326481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.326534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.326543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.326566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.326577 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.429040 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.429086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.429097 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.429112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.429121 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.534091 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.534364 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.534479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.534596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.534681 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.581888 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:33 crc kubenswrapper[4684]: E0123 09:08:33.582074 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.581898 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:33 crc kubenswrapper[4684]: E0123 09:08:33.582160 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.581898 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:33 crc kubenswrapper[4684]: E0123 09:08:33.582219 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.592011 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:45:43.802491857 +0000 UTC Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.637377 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.637907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.638026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.638122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.638217 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.741144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.741178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.741187 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.741200 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.741208 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.842985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.843036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.843052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.843070 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.843083 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.945296 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.945332 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.945344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.945360 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:33 crc kubenswrapper[4684]: I0123 09:08:33.945400 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:33Z","lastTransitionTime":"2026-01-23T09:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.047878 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.047927 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.047939 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.047952 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.047961 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.151545 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.151935 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.152016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.152136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.152219 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.254354 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.254589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.254600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.254632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.254643 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.357478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.357548 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.357566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.357588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.357607 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.460833 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.460878 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.460887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.460902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.460912 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.563365 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.563779 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.563792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.563809 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.563821 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.581064 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:34 crc kubenswrapper[4684]: E0123 09:08:34.581293 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.593028 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:23:56.391673785 +0000 UTC Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.665982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.666014 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.666022 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.666036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.666045 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.769222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.769268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.769280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.769297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.769308 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.871818 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.871858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.871868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.871881 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.871890 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.973759 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.973789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.973803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.973818 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:34 crc kubenswrapper[4684]: I0123 09:08:34.973848 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:34Z","lastTransitionTime":"2026-01-23T09:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.075726 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.075766 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.075778 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.075793 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.075804 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.178142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.178175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.178186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.178201 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.178214 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.280589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.280632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.280745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.280772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.280790 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.383693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.383760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.383772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.383790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.383800 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.485541 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.485594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.485608 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.485623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.485632 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.582495 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.582569 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.582168 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:35 crc kubenswrapper[4684]: E0123 09:08:35.582681 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:35 crc kubenswrapper[4684]: E0123 09:08:35.582887 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:35 crc kubenswrapper[4684]: E0123 09:08:35.582943 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
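Every NetworkReady=false message above traces back to one condition: the directory /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet, so the kubelet keeps the node NotReady and skips creating pod sandboxes. A minimal sketch of that directory probe, assuming the conventional CNI config extensions (.conf, .conflist, .json); this illustrates the check the log implies, not kubelet source:

// cni_config_probe_sketch.go: illustrative only; extensions are assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var configs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		configs = append(configs, matches...)
	}
	if len(configs) == 0 {
		// Mirrors the condition the kubelet keeps logging above.
		fmt.Println("network not ready: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("found CNI configuration:", configs)
}

Once the cluster's network provider writes a valid config into that directory, the same probe finds it, and on the kubelet's next sync the NotReady condition seen in these lines should clear.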
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.588160 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.588230 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.588242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.588268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.588283 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.593438 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:45:51.433542835 +0000 UTC Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.691851 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.691889 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.691945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.691964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.691980 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.794802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.794846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.794856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.794870 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.794880 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.897680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.897743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.897753 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.897769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:35 crc kubenswrapper[4684]: I0123 09:08:35.897782 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:35Z","lastTransitionTime":"2026-01-23T09:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.000840 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.000874 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.000887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.000902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.000914 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.103406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.103448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.103459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.103475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.103486 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.206216 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.206272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.206284 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.206301 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.206312 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.308772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.308798 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.308805 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.308837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.308848 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.411625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.411663 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.411675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.411690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.411721 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.514143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.514431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.514645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.514892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.514990 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.582003 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:36 crc kubenswrapper[4684]: E0123 09:08:36.582166 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.594354 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:42:10.232456423 +0000 UTC Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.617036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.617080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.617091 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.617107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.617118 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.719351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.719393 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.719403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.719418 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.719428 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.822050 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.822090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.822101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.822117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.822128 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.924334 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.924370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.924384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.924396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:36 crc kubenswrapper[4684]: I0123 09:08:36.924404 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:36Z","lastTransitionTime":"2026-01-23T09:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.026392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.026431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.026444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.026457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.026468 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.129143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.129177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.129191 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.129208 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.129219 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.231633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.231675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.231686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.231730 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.231742 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.333607 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.333647 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.333657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.333671 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.333682 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.436282 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.436322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.436331 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.436346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.436389 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.539130 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.539369 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.539430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.539508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.539611 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.581142 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.581186 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.581202 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:37 crc kubenswrapper[4684]: E0123 09:08:37.581284 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:37 crc kubenswrapper[4684]: E0123 09:08:37.581413 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:37 crc kubenswrapper[4684]: E0123 09:08:37.581486 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.594578 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe8e0d00-860e-4d47-9f48-686555520d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87b6f66b276518f9c25bbd5c97bd4a330b2c796958b395d04a01ef7115b95440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e9116
99a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmwsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wtphf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.595270 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:56:06.128819701 +0000 UTC Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.605074 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1145d8-e0e9-481b-9e5c-65815e74874f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlsjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wrrtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.615599 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f3a9b90-c984-4ff9-9c1e-877941f387c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d494d3d24ff74db057c3d7e3a703635ce5b73863f17e5287e60eb112fcadf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3735bcc057b640850e5db0bc7cd406ef0ac0c002d4550e741deaf34cf10908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beeba329cbddfbfbd71509b5d37064ec6031709b1403feb8e76af0e7818516cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb74f1ff410b32092837918e51a33643c917e2cf829af6edd2e36180c64fcba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.628128 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66a59d2f527c396c3b591ef694a20a6852d8e2b2f3d4c77ef0f0b795a18b535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.638602 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.641460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.641494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.641506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.641522 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.641532 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.656344 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd1b372-d164-4037-ae8e-cf634b1c4b41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:26Z\\\",\\\"message\\\":\\\"10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:26Z is after 2025-08-24T17:21:41Z\\\\nI0123 09:08:26.262216 6629 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0123 09:08:26.262245 6629 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-wrrtl]\\\\nI0123 09:08:26.262257 6629 factory.go:656] Stopping watch factory\\\\nI0123 09:08:26.262259 6629 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 09:08:26.262281 6629 ovnkube.go:599] Stopped ovnkube\\\\nI0123 09:08:26.262282 6629 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-wrrtl before timer (time: 2026-01-23 09:08:26.737373558 +0000 UTC m=+1.601472596): skip\\\\nI0123 09:08:26.262299 6629 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 56.172µs)\\\\nI0123 09:08:26.262311 6629 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 09:08:26.262365 6629 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0123 09:08:26.262378 6629 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:08:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l46bg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nk7v5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.670973 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ebb42b-c0ef-423b-8337-cb73bcdbd301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://831d14b0a3293bdf6aaef4805513c47cca40592929fd0a059c0415e6bb072462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53174a72a4ae2ff8105c162641526b8d33dbc8ae6f6301c8c1399e1493d9f6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqdrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckltm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.685273 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.694939 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.705075 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jwr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0885cc-d621-4e36-9e37-1326848bd147\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T09:08:15Z\\\",\\\"message\\\":\\\"2026-01-23T09:07:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6\\\\n2026-01-23T09:07:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fce49956-cc05-4dc7-8d8f-580147be71f6 to /host/opt/cni/bin/\\\\n2026-01-23T09:07:30Z [verbose] multus-daemon started\\\\n2026-01-23T09:07:30Z [verbose] Readiness Indicator file check\\\\n2026-01-23T09:08:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw2mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jwr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.716182 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d1563a-3ca4-4fb0-8365-c1168fbe2e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a6a5854f711f7c177bc9c2ddea16027d535e15a3bbce2771702baed672fc06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d64538fa49212ecd97fac81f22251d985b9963024dcd5625ca82b0a19111fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd008bc398cf858c150426e45222e76743f5cacfffb45c24f2cad83a6140abe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11ea09253e6f4c4eab537b794b793c1f07e8cbaf361c1d8773381e7894805322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dddcfb8219bc8ac2d0f92294aef29222b71b1eb35ac84e7e833905e868e784e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d935dd54133a2edd7ccddba6ec6b4c3ee7c86d3d6bc097b93fab3a6aa873ece9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3f58ad8e7c313247b77e5259a2f82d740ea1f08c3aeaefc116293729ce1b143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrhqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dmqcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.727841 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e31ff448-5258-4887-9532-ccb1444b5a2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T09:07:27Z\\\",\\\"message\\\":\\\"23 09:07:26.845110 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 09:07:26.845113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 09:07:26.845115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 09:07:26.845353 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 09:07:26.849378 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769159230\\\\\\\\\\\\\\\" (2026-01-23 09:07:10 +0000 UTC to 2026-02-22 09:07:11 +0000 UTC (now=2026-01-23 09:07:26.849349521 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849507 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769159241\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769159241\\\\\\\\\\\\\\\" (2026-01-23 08:07:21 +0000 UTC to 2027-01-23 08:07:21 +0000 UTC (now=2026-01-23 09:07:26.849489185 +0000 UTC))\\\\\\\"\\\\nI0123 09:07:26.849527 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 09:07:26.849546 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 09:07:26.849566 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849583 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 09:07:26.849611 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4138284268/tls.crt::/tmp/serving-cert-4138284268/tls.key\\\\\\\"\\\\nI0123 09:07:26.849731 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 09:07:26.849820 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.736752 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7df9a725-0566-46a8-8527-66802dfe40b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f896177a3b765a2129450136ccb007601fff3c2d5669c777ad8af0eeaaf15d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cdc2db678a5d1d932c0ed23c453f2450562334bfa685ec920e0a8bc8af61d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T09:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.743242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.743275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.743283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.743296 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.743306 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.748194 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc74050180463e44d7c545c89833c0282af87ae8cde4800f95e019dbd21ebb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.757866 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6stgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fce7017-186f-4953-b968-c8a8868a0fd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120546e2ca9261a5bc169c39194c52add608d78b5783a10dad5f3ba4ee27c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wv8g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6stgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.766424 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qt2j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5069a6f-07bb-4423-8df0-92cdc541e6de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab843f59e857c481772565098789264b06141f58dd54cbb8dba2e40b44a54ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l62zw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qt2j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.785364 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d618dabd-5de3-4c94-b9c1-69682da77628\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c027c8977c1e3870ef0132bf28d479e8999b1a7d216327be7a9cff2aeee05c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde45d47daa7855ee7caa1df0222d2773fcdc8fb29413c61d6b74f7e7d8fa6e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34540a58dd0dfcebbfd694b24202f58a89ddca8a0f04f3f4f2bcdba4be5c4b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T09:07:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.797387 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T09:07:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bf99a80423f9d4d2262b21f7dc70d1cf73731c48008e484d9768495596d5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T09:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:37Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.848888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.848922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.848930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.848944 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.848953 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.950588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.950645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.950656 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.950670 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:37 crc kubenswrapper[4684]: I0123 09:08:37.950682 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:37Z","lastTransitionTime":"2026-01-23T09:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.053242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.053293 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.053305 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.053321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.053332 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.155551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.155633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.155642 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.155655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.155666 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.257645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.257679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.257687 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.257701 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.257733 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.381429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.381463 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.381472 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.381484 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.381493 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.483178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.483227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.483241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.483257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.483269 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.581391 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:38 crc kubenswrapper[4684]: E0123 09:08:38.581553 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.585212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.585249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.585259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.585275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.585285 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.595765 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:13:51.804775635 +0000 UTC Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.687477 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.687518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.687529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.687548 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.687558 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.789543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.789569 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.789577 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.789588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.789597 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.891872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.891923 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.891938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.891969 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.891982 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.994076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.994108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.994120 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.994135 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:38 crc kubenswrapper[4684]: I0123 09:08:38.994145 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:38Z","lastTransitionTime":"2026-01-23T09:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.097032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.097068 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.097079 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.097094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.097103 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.199325 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.199352 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.199361 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.199486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.199496 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.301722 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.301982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.302076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.302158 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.302251 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.405223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.405544 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.405623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.405724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.405797 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.508648 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.509053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.509175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.509250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.509338 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.582288 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.582358 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.582288 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:39 crc kubenswrapper[4684]: E0123 09:08:39.582548 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:39 crc kubenswrapper[4684]: E0123 09:08:39.582939 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:39 crc kubenswrapper[4684]: E0123 09:08:39.583186 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.596350 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:54:39.139242047 +0000 UTC Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.613085 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.613132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.613143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.613161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.613172 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.715452 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.715693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.715808 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.715895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.716072 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.818975 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.819212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.819285 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.819347 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.819402 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.922457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.922956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.923062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.923160 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:39 crc kubenswrapper[4684]: I0123 09:08:39.923237 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:39Z","lastTransitionTime":"2026-01-23T09:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.026754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.026804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.026921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.026940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.026950 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.130729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.130777 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.130786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.130800 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.130809 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.233446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.233486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.233499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.233515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.233526 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.336232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.336273 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.336285 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.336301 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.336312 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.438622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.438679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.438690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.438717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.438726 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.562808 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.562845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.562856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.562871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.562881 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.581363 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:40 crc kubenswrapper[4684]: E0123 09:08:40.581526 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.597007 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:55:36.814331366 +0000 UTC Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.664769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.664800 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.664810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.664824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.664834 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.766882 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.766919 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.766936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.766956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.766974 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.869321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.869368 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.869380 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.869397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.869857 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.972606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.972655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.972669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.972687 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:40 crc kubenswrapper[4684]: I0123 09:08:40.972722 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:40Z","lastTransitionTime":"2026-01-23T09:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.074673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.074728 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.074738 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.074753 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.074763 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.177494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.177541 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.177555 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.177576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.177594 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.280322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.280358 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.280373 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.280390 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.280403 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.383065 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.383101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.383115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.383136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.383151 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.485744 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.486057 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.486136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.486248 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.486336 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.581418 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.581418 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:41 crc kubenswrapper[4684]: E0123 09:08:41.581541 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.581576 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:41 crc kubenswrapper[4684]: E0123 09:08:41.581750 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:41 crc kubenswrapper[4684]: E0123 09:08:41.581927 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.588467 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.588504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.588521 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.588539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.588555 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.597845 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:45:25.032345319 +0000 UTC Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.690536 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.690567 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.690576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.690588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.690597 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.792710 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.792753 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.792770 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.792787 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.792798 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.894670 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.894738 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.894767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.894783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.894793 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.997368 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.997404 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.997416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.997430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:41 crc kubenswrapper[4684]: I0123 09:08:41.997442 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:41Z","lastTransitionTime":"2026-01-23T09:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.100125 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.100160 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.100179 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.100197 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.100209 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.202764 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.202803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.202814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.202830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.202844 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.305262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.305300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.305310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.305325 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.305335 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.407737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.407775 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.407786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.407803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.407814 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.511212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.511298 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.511314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.511332 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.511344 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.582169 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.582522 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.598380 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:02:18.438098641 +0000 UTC Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.613708 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.613931 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.614006 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.614089 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.614153 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.615493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.615602 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.615680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.615778 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.615859 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.627914 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.631226 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.631386 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.631579 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.631676 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.631782 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.645198 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.648929 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.649142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.649374 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.649583 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.649808 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.661588 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.666017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.666049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.666059 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.666076 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.666088 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.676629 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:42Z is after 2025-08-24T17:21:41Z" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.679940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.680308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.680405 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.680501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.680604 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.692122 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T09:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bcfe8adf-9d26-48e3-b456-e1c8d79ddfed\\\",\\\"systemUUID\\\":\\\"63162577-fb09-4289-a5f3-3b12988dcfbf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T09:08:42Z is after 
2025-08-24T17:21:41Z" Jan 23 09:08:42 crc kubenswrapper[4684]: E0123 09:08:42.692519 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.716427 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.716727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.716860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.716989 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.717090 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.818849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.819085 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.819145 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.819215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.819276 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.921088 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.921429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.921667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.921918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:42 crc kubenswrapper[4684]: I0123 09:08:42.922108 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:42Z","lastTransitionTime":"2026-01-23T09:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.024836 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.024868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.024877 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.024890 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.024899 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.126828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.126861 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.126894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.126910 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.126921 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.233817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.233854 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.233863 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.233876 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.233885 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.336372 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.336428 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.336436 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.336448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.336457 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.438735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.439188 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.439306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.439404 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.439500 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.541792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.541822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.541831 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.541844 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.541855 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.581079 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:43 crc kubenswrapper[4684]: E0123 09:08:43.581226 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.581459 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:43 crc kubenswrapper[4684]: E0123 09:08:43.581527 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.581665 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:43 crc kubenswrapper[4684]: E0123 09:08:43.581789 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.594587 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.599034 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:10:25.507008506 +0000 UTC Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.643636 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.643881 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.643967 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.644232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.644350 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.746497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.746960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.747107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.748457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.748567 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.851071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.851117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.851131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.851149 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.851160 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.953621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.953659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.953671 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.953688 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:43 crc kubenswrapper[4684]: I0123 09:08:43.953713 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:43Z","lastTransitionTime":"2026-01-23T09:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.056327 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.056356 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.056363 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.056377 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.056386 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.159362 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.159402 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.159411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.159427 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.159437 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.261719 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.261757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.261770 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.261786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.261797 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.364477 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.364518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.364529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.364543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.364554 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.466835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.466872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.466882 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.466897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.466907 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.568593 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.568640 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.568649 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.568661 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.568670 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.582147 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:44 crc kubenswrapper[4684]: E0123 09:08:44.582436 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
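The setters.go entries repeated throughout this log all patch the same Ready condition onto the Node object. A small sketch of that condition's JSON shape, using hand-rolled struct tags that match the keys logged above rather than an imported Kubernetes API type:

```go
// Sketch of the Node Ready condition payload seen in the setters.go
// lines above. The struct is local to this example, not a k8s.io type.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	now := time.Date(2026, 1, 23, 9, 8, 44, 0, time.UTC)
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same shape as the condition={...} payload above
}
```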
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.599943 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:13:26.647228962 +0000 UTC Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.671570 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.671615 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.671625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.671638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.671649 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.774273 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.774315 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.774328 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.774342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.774354 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.875905 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.875936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.875947 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.875964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.875974 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.978202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.978437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.978524 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.978622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:44 crc kubenswrapper[4684]: I0123 09:08:44.978728 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:44Z","lastTransitionTime":"2026-01-23T09:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.080560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.080595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.080605 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.080619 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.080631 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.183162 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.183439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.183519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.183587 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.183644 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.286280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.286316 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.286328 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.286344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.286354 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.388479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.388513 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.388523 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.388540 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.388551 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.490814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.491040 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.491145 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.491219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.491293 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.582722 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:45 crc kubenswrapper[4684]: E0123 09:08:45.582866 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.582892 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:45 crc kubenswrapper[4684]: E0123 09:08:45.582968 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.583108 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:45 crc kubenswrapper[4684]: E0123 09:08:45.583167 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.597686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.597749 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.597758 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.597773 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.597782 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.600842 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:34:53.813197849 +0000 UTC Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.700237 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.700286 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.700297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.700315 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.700325 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.802729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.803049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.803182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.803319 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.803457 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.906210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.906539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.906626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.906716 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:45 crc kubenswrapper[4684]: I0123 09:08:45.906789 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:45Z","lastTransitionTime":"2026-01-23T09:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.009092 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.009136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.009146 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.009162 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.009173 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.111156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.111190 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.111200 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.111214 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.111224 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.116873 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:46 crc kubenswrapper[4684]: E0123 09:08:46.116998 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 09:08:46 crc kubenswrapper[4684]: E0123 09:08:46.117061 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs podName:8a1145d8-e0e9-481b-9e5c-65815e74874f nodeName:}" failed. No retries permitted until 2026-01-23 09:09:50.117045445 +0000 UTC m=+162.740423986 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs") pod "network-metrics-daemon-wrrtl" (UID: "8a1145d8-e0e9-481b-9e5c-65815e74874f") : object "openshift-multus"/"metrics-daemon-secret" not registered
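The nestedpendingoperations entry above blocks further retries for 1m4s, which is consistent with a per-operation delay that doubles after each failure: 64s is 500ms doubled seven times. A sketch of that schedule, with the 500ms initial delay and the cap both assumed for illustration rather than read from this log:

```go
// Doubling retry-delay sketch matching "durationBeforeRetry 1m4s" above.
// initialDelay and maxDelay are assumptions for illustration.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

// backoff returns the wait imposed after the given number of consecutive
// failures: initialDelay doubled once per additional failure, capped.
func backoff(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, backoff(n))
	}
	// failure 8 prints 1m4s, the durationBeforeRetry seen in the log above.
}
```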
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.213072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.213100 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.213109 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.213122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.213132 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.315488 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.315535 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.315546 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.315562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.315570 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.418115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.418147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.418158 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.418172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.418182 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.520470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.520814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.520947 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.521065 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.521152 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.580956 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:46 crc kubenswrapper[4684]: E0123 09:08:46.581140 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.581796 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7"
Jan 23 09:08:46 crc kubenswrapper[4684]: E0123 09:08:46.581950 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.601984 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:42:25.358692516 +0000 UTC
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.622980 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.623010 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.623020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.623035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.623044 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.724767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.724809 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.724819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.724833 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.724843 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.827600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.827639 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.827655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.827669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.827678 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.930752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.930780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.930790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.930804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:46 crc kubenswrapper[4684]: I0123 09:08:46.930813 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:46Z","lastTransitionTime":"2026-01-23T09:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.032580 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.032850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.032955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.033026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.033087 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.136453 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.136745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.136830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.136899 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.136964 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.239583 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.240590 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.240894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.241019 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.241143 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.345355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.345669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.345970 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.346156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.346319 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.449893 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.450397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.450487 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.450616 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.450737 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.553819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.553849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.553858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.553871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.553881 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.581475 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.581970 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.581570 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:47 crc kubenswrapper[4684]: E0123 09:08:47.582093 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:47 crc kubenswrapper[4684]: E0123 09:08:47.582138 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:47 crc kubenswrapper[4684]: E0123 09:08:47.582203 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.602916 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:36:11.950726598 +0000 UTC Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.605352 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6stgf" podStartSLOduration=80.605335178 podStartE2EDuration="1m20.605335178s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.594842172 +0000 UTC m=+100.218220713" watchObservedRunningTime="2026-01-23 09:08:47.605335178 +0000 UTC m=+100.228713719" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.621757 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qt2j2" podStartSLOduration=80.621736594 podStartE2EDuration="1m20.621736594s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.606271747 +0000 UTC m=+100.229650288" watchObservedRunningTime="2026-01-23 09:08:47.621736594 +0000 UTC m=+100.245115125" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.643477 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.64345882 podStartE2EDuration="1m19.64345882s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.623849318 +0000 UTC m=+100.247227859" watchObservedRunningTime="2026-01-23 09:08:47.64345882 +0000 UTC m=+100.266837361" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.656112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.656147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.656157 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.656172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.656182 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.696755 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.696734239 podStartE2EDuration="44.696734239s" podCreationTimestamp="2026-01-23 09:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.685677235 +0000 UTC m=+100.309055796" watchObservedRunningTime="2026-01-23 09:08:47.696734239 +0000 UTC m=+100.320112780" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.709488 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podStartSLOduration=80.709469063 podStartE2EDuration="1m20.709469063s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.709214725 +0000 UTC m=+100.332593266" watchObservedRunningTime="2026-01-23 09:08:47.709469063 +0000 UTC m=+100.332847604" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.757954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.758015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.758029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.758067 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.758079 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.820868 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckltm" podStartSLOduration=79.820849057 podStartE2EDuration="1m19.820849057s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.779878179 +0000 UTC m=+100.403256720" watchObservedRunningTime="2026-01-23 09:08:47.820849057 +0000 UTC m=+100.444227598" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.835117 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.8350999770000005 podStartE2EDuration="4.835099977s" podCreationTimestamp="2026-01-23 09:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.820783435 +0000 UTC m=+100.444161996" watchObservedRunningTime="2026-01-23 09:08:47.835099977 +0000 UTC m=+100.458478518" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.859990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.860247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.860343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.860437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.860533 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.863989 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jwr4q" podStartSLOduration=80.863972519 podStartE2EDuration="1m20.863972519s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.863856355 +0000 UTC m=+100.487234916" watchObservedRunningTime="2026-01-23 09:08:47.863972519 +0000 UTC m=+100.487351060" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.885580 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dmqcw" podStartSLOduration=80.885562471 podStartE2EDuration="1m20.885562471s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.884886581 +0000 UTC m=+100.508265122" watchObservedRunningTime="2026-01-23 09:08:47.885562471 +0000 UTC m=+100.508941012" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.901924 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.901902694 podStartE2EDuration="1m20.901902694s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.900818732 +0000 UTC m=+100.524197273" watchObservedRunningTime="2026-01-23 09:08:47.901902694 +0000 UTC m=+100.525281235" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.925359 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=36.925342622 podStartE2EDuration="36.925342622s" podCreationTimestamp="2026-01-23 09:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:47.913067742 +0000 UTC m=+100.536446293" watchObservedRunningTime="2026-01-23 09:08:47.925342622 +0000 UTC m=+100.548721163" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.963090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.963131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.963140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.963155 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:47 crc kubenswrapper[4684]: I0123 09:08:47.963167 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:47Z","lastTransitionTime":"2026-01-23T09:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.065370 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.065711 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.065807 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.065894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.065980 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.168199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.168242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.168253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.168268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.168279 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.270604 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.270637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.270647 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.270662 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.270671 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.372917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.373183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.373257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.373323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.373390 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.475627 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.475931 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.476157 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.476345 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.476531 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.579303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.579344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.579354 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.579367 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.579376 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.581543 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:48 crc kubenswrapper[4684]: E0123 09:08:48.581781 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.603521 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:24:18.561287878 +0000 UTC Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.681470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.681497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.681506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.681519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.681527 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.783961 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.783997 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.784008 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.784026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.784037 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.886675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.886725 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.886737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.886753 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.886765 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.989100 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.989141 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.989152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.989174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:48 crc kubenswrapper[4684]: I0123 09:08:48.989184 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:48Z","lastTransitionTime":"2026-01-23T09:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.090869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.091188 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.091317 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.091394 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.091463 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.194144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.194480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.194594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.194720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.194849 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.297152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.297181 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.297212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.297228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.297238 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.399306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.399351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.399366 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.399386 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.399400 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.502111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.502377 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.502457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.502535 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.502599 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.581275 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:08:49 crc kubenswrapper[4684]: E0123 09:08:49.581414 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.581618 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:49 crc kubenswrapper[4684]: E0123 09:08:49.581804 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.581629 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:49 crc kubenswrapper[4684]: E0123 09:08:49.582173 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.604145 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:06:39.140050755 +0000 UTC Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.605300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.605435 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.605534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.605636 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.605765 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.708614 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.708913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.708997 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.709081 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.709193 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.812235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.812511 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.812621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.812719 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.812801 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.915051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.915086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.915096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.915110 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:49 crc kubenswrapper[4684]: I0123 09:08:49.915121 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:49Z","lastTransitionTime":"2026-01-23T09:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.018152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.018441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.018535 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.018630 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.018766 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.121145 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.121401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.121500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.121581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.121642 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.224418 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.224855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.224954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.225049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.225143 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.328685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.329054 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.329140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.329219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.329305 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.431638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.432081 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.432194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.432279 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.432352 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.534845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.534883 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.534892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.534907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.534917 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.581520 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:08:50 crc kubenswrapper[4684]: E0123 09:08:50.582001 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.604346 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:21:15.504664364 +0000 UTC Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.637344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.637382 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.637395 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.637412 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.637424 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.739757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.739790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.739798 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.739811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.739820 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.842566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.842591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.842599 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.842613 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.842624 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.945772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.946066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.946177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.946441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:50 crc kubenswrapper[4684]: I0123 09:08:50.946529 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:50Z","lastTransitionTime":"2026-01-23T09:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.050161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.050195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.050227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.050244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.050255 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.152643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.152733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.152746 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.152761 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.152771 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.257515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.257811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.257934 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.258039 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.258873 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.361451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.361483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.361493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.361508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.361519 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.464172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.464224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.464234 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.464249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.464259 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.566915 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.566948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.566957 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.566971 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.566981 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.581856 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.581899 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:08:51 crc kubenswrapper[4684]: E0123 09:08:51.581989 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.582300 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:51 crc kubenswrapper[4684]: E0123 09:08:51.582374 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:51 crc kubenswrapper[4684]: E0123 09:08:51.582636 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.605396 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:38:43.056767237 +0000 UTC
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.669420 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.669466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.669484 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.669503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.669517 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.771789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.771829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.771861 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.771878 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.771889 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.874922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.874959 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.874970 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.874985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:51 crc kubenswrapper[4684]: I0123 09:08:51.874996 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:51Z","lastTransitionTime":"2026-01-23T09:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.079640 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.079682 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.079691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.079721 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.079730 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.182541 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.182595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.182611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.182638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.182655 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.284907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.284937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.284946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.284959 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.284968 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.387026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.387066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.387074 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.387086 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.387096 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.488632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.488667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.488678 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.488693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.488718 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.580957 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:52 crc kubenswrapper[4684]: E0123 09:08:52.581068 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.591324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.591365 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.591377 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.591391 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.591402 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.605856 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:10:33.195106037 +0000 UTC
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.693350 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.693398 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.693416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.693437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.693454 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.713617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.713655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.713670 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.713692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.713717 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T09:08:52Z","lastTransitionTime":"2026-01-23T09:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.755722 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"]
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.756108 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.758087 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.758204 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.759104 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.759429 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.782715 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bd1e3ec-8287-4db5-8171-8a450568ff3d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.782758 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bd1e3ec-8287-4db5-8171-8a450568ff3d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.782781 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bd1e3ec-8287-4db5-8171-8a450568ff3d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.782827 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bd1e3ec-8287-4db5-8171-8a450568ff3d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.782902 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bd1e3ec-8287-4db5-8171-8a450568ff3d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883428 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bd1e3ec-8287-4db5-8171-8a450568ff3d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883479 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bd1e3ec-8287-4db5-8171-8a450568ff3d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883508 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bd1e3ec-8287-4db5-8171-8a450568ff3d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883531 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bd1e3ec-8287-4db5-8171-8a450568ff3d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883557 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bd1e3ec-8287-4db5-8171-8a450568ff3d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883653 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bd1e3ec-8287-4db5-8171-8a450568ff3d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.883782 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bd1e3ec-8287-4db5-8171-8a450568ff3d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.884886 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bd1e3ec-8287-4db5-8171-8a450568ff3d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.891755 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bd1e3ec-8287-4db5-8171-8a450568ff3d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:52 crc kubenswrapper[4684]: I0123 09:08:52.902042 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bd1e3ec-8287-4db5-8171-8a450568ff3d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8l4h\" (UID: \"8bd1e3ec-8287-4db5-8171-8a450568ff3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.071784 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h"
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.103169 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h" event={"ID":"8bd1e3ec-8287-4db5-8171-8a450568ff3d","Type":"ContainerStarted","Data":"1893d290f5d85787ce62279e5e2f9676124528a4e59f95e9ad05725dcfe85630"}
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.584341 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:53 crc kubenswrapper[4684]: E0123 09:08:53.584729 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.584935 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:53 crc kubenswrapper[4684]: E0123 09:08:53.585139 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.585342 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:53 crc kubenswrapper[4684]: E0123 09:08:53.585514 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.606943 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:56:01.329917798 +0000 UTC
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.607002 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 23 09:08:53 crc kubenswrapper[4684]: I0123 09:08:53.613489 4684 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 23 09:08:54 crc kubenswrapper[4684]: I0123 09:08:54.107183 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h" event={"ID":"8bd1e3ec-8287-4db5-8171-8a450568ff3d","Type":"ContainerStarted","Data":"d7c1a2bd78506591ee18b930f6b8cd8f802c6a7ca239b4dd21c6d0217e3276c7"}
Jan 23 09:08:54 crc kubenswrapper[4684]: I0123 09:08:54.581917 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:54 crc kubenswrapper[4684]: E0123 09:08:54.582212 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:55 crc kubenswrapper[4684]: I0123 09:08:55.581016 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:55 crc kubenswrapper[4684]: E0123 09:08:55.581156 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:55 crc kubenswrapper[4684]: I0123 09:08:55.581295 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:55 crc kubenswrapper[4684]: E0123 09:08:55.581422 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:55 crc kubenswrapper[4684]: I0123 09:08:55.581607 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:55 crc kubenswrapper[4684]: E0123 09:08:55.581672 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:56 crc kubenswrapper[4684]: I0123 09:08:56.581655 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:56 crc kubenswrapper[4684]: E0123 09:08:56.582300 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:57 crc kubenswrapper[4684]: I0123 09:08:57.581317 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:57 crc kubenswrapper[4684]: I0123 09:08:57.581318 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:57 crc kubenswrapper[4684]: I0123 09:08:57.581358 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:57 crc kubenswrapper[4684]: E0123 09:08:57.582527 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:57 crc kubenswrapper[4684]: E0123 09:08:57.583044 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:57 crc kubenswrapper[4684]: E0123 09:08:57.583272 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:08:57 crc kubenswrapper[4684]: I0123 09:08:57.583857 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7"
Jan 23 09:08:57 crc kubenswrapper[4684]: E0123 09:08:57.584001 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nk7v5_openshift-ovn-kubernetes(5fd1b372-d164-4037-ae8e-cf634b1c4b41)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41"
Jan 23 09:08:58 crc kubenswrapper[4684]: I0123 09:08:58.581318 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:08:58 crc kubenswrapper[4684]: E0123 09:08:58.581549 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:08:59 crc kubenswrapper[4684]: I0123 09:08:59.581308 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:08:59 crc kubenswrapper[4684]: E0123 09:08:59.581932 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:08:59 crc kubenswrapper[4684]: I0123 09:08:59.581512 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:08:59 crc kubenswrapper[4684]: E0123 09:08:59.582277 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:08:59 crc kubenswrapper[4684]: I0123 09:08:59.581454 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:08:59 crc kubenswrapper[4684]: E0123 09:08:59.582584 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:00 crc kubenswrapper[4684]: I0123 09:09:00.581172 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:00 crc kubenswrapper[4684]: E0123 09:09:00.581307 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:01 crc kubenswrapper[4684]: I0123 09:09:01.582072 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:01 crc kubenswrapper[4684]: I0123 09:09:01.582137 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:01 crc kubenswrapper[4684]: I0123 09:09:01.582106 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:01 crc kubenswrapper[4684]: E0123 09:09:01.582225 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:01 crc kubenswrapper[4684]: E0123 09:09:01.582300 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:01 crc kubenswrapper[4684]: E0123 09:09:01.582360 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.130321 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/1.log"
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.130734 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/0.log"
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.130767 4684 generic.go:334] "Generic (PLEG): container finished" podID="ab0885cc-d621-4e36-9e37-1326848bd147" containerID="7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3" exitCode=1
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.130803 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerDied","Data":"7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3"}
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.130843 4684 scope.go:117] "RemoveContainer" containerID="d957cfbf388d17fa825ac41c56e15d6cd4caec6e13b2fb8c93b304205f0bbefe"
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.131257 4684 scope.go:117] "RemoveContainer" containerID="7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3"
Jan 23 09:09:02 crc kubenswrapper[4684]: E0123 09:09:02.131410 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jwr4q_openshift-multus(ab0885cc-d621-4e36-9e37-1326848bd147)\"" pod="openshift-multus/multus-jwr4q" podUID="ab0885cc-d621-4e36-9e37-1326848bd147"
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.153028 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8l4h" podStartSLOduration=95.153011915 podStartE2EDuration="1m35.153011915s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:54.122447255 +0000 UTC m=+106.745825796" watchObservedRunningTime="2026-01-23 09:09:02.153011915 +0000 UTC m=+114.776390456"
Jan 23 09:09:02 crc kubenswrapper[4684]: I0123 09:09:02.582049 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:02 crc kubenswrapper[4684]: E0123 09:09:02.582239 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:03 crc kubenswrapper[4684]: I0123 09:09:03.135148 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/1.log"
Jan 23 09:09:03 crc kubenswrapper[4684]: I0123 09:09:03.581806 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:03 crc kubenswrapper[4684]: I0123 09:09:03.581872 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:03 crc kubenswrapper[4684]: I0123 09:09:03.581813 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:03 crc kubenswrapper[4684]: E0123 09:09:03.581963 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:03 crc kubenswrapper[4684]: E0123 09:09:03.582072 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:03 crc kubenswrapper[4684]: E0123 09:09:03.582161 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:04 crc kubenswrapper[4684]: I0123 09:09:04.775675 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:04 crc kubenswrapper[4684]: E0123 09:09:04.775903 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:05 crc kubenswrapper[4684]: I0123 09:09:05.581676 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:05 crc kubenswrapper[4684]: I0123 09:09:05.581758 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:05 crc kubenswrapper[4684]: E0123 09:09:05.582386 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:05 crc kubenswrapper[4684]: I0123 09:09:05.581848 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:05 crc kubenswrapper[4684]: E0123 09:09:05.582734 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:05 crc kubenswrapper[4684]: E0123 09:09:05.582624 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:06 crc kubenswrapper[4684]: I0123 09:09:06.581401 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:06 crc kubenswrapper[4684]: E0123 09:09:06.581686 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:07 crc kubenswrapper[4684]: I0123 09:09:07.581741 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:07 crc kubenswrapper[4684]: E0123 09:09:07.581895 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:07 crc kubenswrapper[4684]: I0123 09:09:07.582613 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:07 crc kubenswrapper[4684]: I0123 09:09:07.582689 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:07 crc kubenswrapper[4684]: E0123 09:09:07.583228 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:07 crc kubenswrapper[4684]: E0123 09:09:07.583428 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:07 crc kubenswrapper[4684]: E0123 09:09:07.615061 4684 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 23 09:09:07 crc kubenswrapper[4684]: E0123 09:09:07.674082 4684 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 09:09:08 crc kubenswrapper[4684]: I0123 09:09:08.581202 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:08 crc kubenswrapper[4684]: E0123 09:09:08.581365 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:09 crc kubenswrapper[4684]: I0123 09:09:09.581034 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:09 crc kubenswrapper[4684]: I0123 09:09:09.581070 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:09 crc kubenswrapper[4684]: E0123 09:09:09.581214 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:09 crc kubenswrapper[4684]: I0123 09:09:09.581254 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:09 crc kubenswrapper[4684]: E0123 09:09:09.581358 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:09 crc kubenswrapper[4684]: E0123 09:09:09.581428 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:10 crc kubenswrapper[4684]: I0123 09:09:10.581285 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:10 crc kubenswrapper[4684]: E0123 09:09:10.581454 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:11 crc kubenswrapper[4684]: I0123 09:09:11.581933 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:11 crc kubenswrapper[4684]: I0123 09:09:11.582046 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:11 crc kubenswrapper[4684]: I0123 09:09:11.581939 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:11 crc kubenswrapper[4684]: E0123 09:09:11.582088 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:11 crc kubenswrapper[4684]: E0123 09:09:11.582505 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:11 crc kubenswrapper[4684]: E0123 09:09:11.582621 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:11 crc kubenswrapper[4684]: I0123 09:09:11.582791 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7"
Jan 23 09:09:12 crc kubenswrapper[4684]: I0123 09:09:12.164788 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log"
Jan 23 09:09:12 crc kubenswrapper[4684]: I0123 09:09:12.167736 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerStarted","Data":"8218cbc66b770be0ac1518a792ef1b287a309ea7d28374ac237fea5de79088e5"}
Jan 23 09:09:12 crc kubenswrapper[4684]: I0123 09:09:12.169116 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5"
Jan 23 09:09:12 crc kubenswrapper[4684]: I0123 09:09:12.195166 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podStartSLOduration=105.195146568 podStartE2EDuration="1m45.195146568s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:12.194252701 +0000 UTC m=+124.817631252" watchObservedRunningTime="2026-01-23 09:09:12.195146568 +0000 UTC m=+124.818525119"
Jan 23 09:09:12 crc kubenswrapper[4684]: I0123 09:09:12.538801 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wrrtl"]
Jan 23 09:09:12 crc kubenswrapper[4684]: I0123 09:09:12.538896 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:12 crc kubenswrapper[4684]: E0123 09:09:12.539018 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:12 crc kubenswrapper[4684]: E0123 09:09:12.676674 4684 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 09:09:13 crc kubenswrapper[4684]: I0123 09:09:13.581358 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:13 crc kubenswrapper[4684]: I0123 09:09:13.581375 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:13 crc kubenswrapper[4684]: I0123 09:09:13.581480 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:13 crc kubenswrapper[4684]: E0123 09:09:13.581649 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:13 crc kubenswrapper[4684]: E0123 09:09:13.581777 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:13 crc kubenswrapper[4684]: E0123 09:09:13.581577 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:14 crc kubenswrapper[4684]: I0123 09:09:14.581860 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:14 crc kubenswrapper[4684]: I0123 09:09:14.582229 4684 scope.go:117] "RemoveContainer" containerID="7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3"
Jan 23 09:09:14 crc kubenswrapper[4684]: E0123 09:09:14.583229 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:15 crc kubenswrapper[4684]: I0123 09:09:15.582089 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:15 crc kubenswrapper[4684]: I0123 09:09:15.582189 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:15 crc kubenswrapper[4684]: E0123 09:09:15.582266 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:15 crc kubenswrapper[4684]: E0123 09:09:15.582370 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:15 crc kubenswrapper[4684]: I0123 09:09:15.582546 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:15 crc kubenswrapper[4684]: E0123 09:09:15.582623 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:16 crc kubenswrapper[4684]: I0123 09:09:16.180772 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/1.log"
Jan 23 09:09:16 crc kubenswrapper[4684]: I0123 09:09:16.181070 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerStarted","Data":"610ad7c3751dfca11e84d63256a09136a679cafc9de6642417b891d4b967f206"}
Jan 23 09:09:16 crc kubenswrapper[4684]: I0123 09:09:16.581349 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:16 crc kubenswrapper[4684]: E0123 09:09:16.581484 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:17 crc kubenswrapper[4684]: I0123 09:09:17.581049 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:17 crc kubenswrapper[4684]: I0123 09:09:17.581079 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:17 crc kubenswrapper[4684]: I0123 09:09:17.582952 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:17 crc kubenswrapper[4684]: E0123 09:09:17.582968 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:17 crc kubenswrapper[4684]: E0123 09:09:17.583059 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:17 crc kubenswrapper[4684]: E0123 09:09:17.583133 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:17 crc kubenswrapper[4684]: E0123 09:09:17.678007 4684 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 09:09:18 crc kubenswrapper[4684]: I0123 09:09:18.581284 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:18 crc kubenswrapper[4684]: E0123 09:09:18.581460 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:19 crc kubenswrapper[4684]: I0123 09:09:19.582014 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:19 crc kubenswrapper[4684]: I0123 09:09:19.582138 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:19 crc kubenswrapper[4684]: I0123 09:09:19.582172 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:19 crc kubenswrapper[4684]: E0123 09:09:19.583083 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:19 crc kubenswrapper[4684]: E0123 09:09:19.582980 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:19 crc kubenswrapper[4684]: E0123 09:09:19.582845 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:20 crc kubenswrapper[4684]: I0123 09:09:20.581234 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:20 crc kubenswrapper[4684]: E0123 09:09:20.581392 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f"
Jan 23 09:09:21 crc kubenswrapper[4684]: I0123 09:09:21.581242 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:21 crc kubenswrapper[4684]: E0123 09:09:21.581377 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 09:09:21 crc kubenswrapper[4684]: I0123 09:09:21.581487 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 09:09:21 crc kubenswrapper[4684]: I0123 09:09:21.581517 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 09:09:21 crc kubenswrapper[4684]: E0123 09:09:21.581644 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 09:09:21 crc kubenswrapper[4684]: E0123 09:09:21.581752 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 09:09:22 crc kubenswrapper[4684]: I0123 09:09:22.581532 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:09:22 crc kubenswrapper[4684]: E0123 09:09:22.582392 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wrrtl" podUID="8a1145d8-e0e9-481b-9e5c-65815e74874f" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.494585 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.539228 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.539684 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.540331 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tbqbw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.545843 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.546572 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.547766 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.548242 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7j9vw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555090 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555266 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555097 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555341 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555447 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555452 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555539 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555554 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555538 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555683 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.555981 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.556882 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.557098 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.557219 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.557874 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mc6nm"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.558217 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.558474 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7895"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.558566 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.559223 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.560116 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.560731 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.561061 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pgngb"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.563542 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.564423 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.564877 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hv7d8"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.566563 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.561690 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.583728 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.583766 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.583856 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.674360 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.674846 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675135 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675233 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675292 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675390 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675402 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675532 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675717 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675731 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675746 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.675861 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.676038 4684 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.676138 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.676509 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.677662 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.681132 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.681388 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.681580 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.681936 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682090 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682226 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682373 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682548 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682733 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682745 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.682959 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683084 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683304 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683359 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683390 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683466 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683600 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683661 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683793 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683880 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.683924 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.684039 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.684059 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.684060 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.684733 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.686796 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.689541 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.689817 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.690023 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.690228 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.690493 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.690877 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.690937 4684 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.691177 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.691320 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.691457 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.691632 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.692600 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-642xz"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.693151 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.696208 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.696561 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.696790 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.696959 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.698447 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.699107 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.699416 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.699574 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.699741 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.700118 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.700154 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.717788 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-config\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.717908 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngtr\" (UniqueName: \"kubernetes.io/projected/6bea838f-25ef-4690-b5c9-feddd10b04bf-kube-api-access-rngtr\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.717942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.717985 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.718015 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-config\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.718067 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725081 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8c7\" (UniqueName: \"kubernetes.io/projected/e6245e77-409a-4116-8c6e-78b21d87529f-kube-api-access-dq8c7\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725195 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725235 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-serving-cert\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725277 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c846db13-b93b-4e07-9e7b-e22106203982-audit-dir\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725412 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725457 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4sb\" (UniqueName: \"kubernetes.io/projected/513ccd39-0870-4964-85a2-0e9eb9d14a85-kube-api-access-jx4sb\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725491 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-audit-policies\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725588 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725688 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d318028-1d65-442a-9e50-ccf71fb54b04-serving-cert\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725740 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ccd39-0870-4964-85a2-0e9eb9d14a85-serving-cert\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725769 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/e6245e77-409a-4116-8c6e-78b21d87529f-node-pullsecrets\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725792 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bea838f-25ef-4690-b5c9-feddd10b04bf-audit-dir\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725817 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgrt\" (UniqueName: \"kubernetes.io/projected/1d318028-1d65-442a-9e50-ccf71fb54b04-kube-api-access-chgrt\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725846 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725949 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a0343333-605f-4fb8-adb7-8423a1d36552-machine-approver-tls\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.725980 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcgv\" (UniqueName: \"kubernetes.io/projected/8fa74b73-0b76-426c-a769-39477ab913f6-kube-api-access-2xcgv\") pod \"downloads-7954f5f757-mc6nm\" (UID: \"8fa74b73-0b76-426c-a769-39477ab913f6\") " pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726019 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-etcd-client\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726049 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-etcd-client\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726138 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6245e77-409a-4116-8c6e-78b21d87529f-audit-dir\") pod \"apiserver-76f77b778f-7j9vw\" (UID: 
\"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726190 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-config\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726219 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b3c5fb5-4205-4162-9d9e-b522ee092236-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726251 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxqp\" (UniqueName: \"kubernetes.io/projected/e9844493-3620-4f52-bfae-61a79062d001-kube-api-access-fjxqp\") pod \"cluster-samples-operator-665b6dd947-5r2wv\" (UID: \"e9844493-3620-4f52-bfae-61a79062d001\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726262 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726301 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-client-ca\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726328 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a0343333-605f-4fb8-adb7-8423a1d36552-auth-proxy-config\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726378 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnm7\" (UniqueName: \"kubernetes.io/projected/c846db13-b93b-4e07-9e7b-e22106203982-kube-api-access-2nnm7\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726536 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b3c5fb5-4205-4162-9d9e-b522ee092236-images\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726578 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-service-ca-bundle\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726635 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727108 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.726632 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvnz\" (UniqueName: \"kubernetes.io/projected/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-kube-api-access-qmvnz\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727556 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727617 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-encryption-config\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727679 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-audit-policies\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727752 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727781 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9z7v\" (UniqueName: \"kubernetes.io/projected/a0343333-605f-4fb8-adb7-8423a1d36552-kube-api-access-m9z7v\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727838 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727888 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.727920 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.728879 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fjz\" (UniqueName: \"kubernetes.io/projected/9b3c5fb5-4205-4162-9d9e-b522ee092236-kube-api-access-59fjz\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729032 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-audit\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729066 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-serving-cert\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729207 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729369 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9844493-3620-4f52-bfae-61a79062d001-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5r2wv\" (UID: \"e9844493-3620-4f52-bfae-61a79062d001\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729400 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729750 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-image-import-ca\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.729919 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0343333-605f-4fb8-adb7-8423a1d36552-config\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730061 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-encryption-config\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730095 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730236 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-config\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730265 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730409 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730568 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.730597 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9nx\" (UniqueName: \"kubernetes.io/projected/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-kube-api-access-df9nx\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.737805 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-client-ca\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.737991 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3c5fb5-4205-4162-9d9e-b522ee092236-config\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.738178 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-serving-cert\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.738215 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.762466 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.763168 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wd9fz"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.763988 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kx2tw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.764411 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.764882 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.765379 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.765661 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.775230 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.775538 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.776051 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.776585 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.776942 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.777055 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.777146 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.777540 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.777872 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.777976 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.778350 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.778612 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.778794 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.778934 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.782220 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.782272 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tbqbw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.782584 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 
09:09:23.783150 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.785667 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p2wtg"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.786162 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wn9b6"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.786560 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.787010 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.793870 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.794420 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.795420 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.807829 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.807983 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.808485 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.808775 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.808969 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.819290 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.820383 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.820745 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.820867 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.821028 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.821481 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.821767 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.822068 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.822096 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.822657 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.826860 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.828169 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.828335 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.828536 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.828801 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.834204 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839163 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-encryption-config\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839208 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-audit-policies\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839231 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839258 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-serving-cert\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839286 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9z7v\" (UniqueName: \"kubernetes.io/projected/a0343333-605f-4fb8-adb7-8423a1d36552-kube-api-access-m9z7v\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839308 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839331 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e19380fe-fa6c-4c7e-a706-aea1c30a6013-serving-cert\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839352 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xzz\" (UniqueName: \"kubernetes.io/projected/e19380fe-fa6c-4c7e-a706-aea1c30a6013-kube-api-access-98xzz\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 
23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839389 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839415 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839457 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fjz\" (UniqueName: \"kubernetes.io/projected/9b3c5fb5-4205-4162-9d9e-b522ee092236-kube-api-access-59fjz\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839479 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-audit\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839500 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-serving-cert\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839521 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839547 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9844493-3620-4f52-bfae-61a79062d001-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5r2wv\" (UID: \"e9844493-3620-4f52-bfae-61a79062d001\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839583 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839606 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-image-import-ca\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839630 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0343333-605f-4fb8-adb7-8423a1d36552-config\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839662 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-encryption-config\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839682 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839728 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-config\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839749 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839772 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839824 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9nx\" (UniqueName: \"kubernetes.io/projected/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-kube-api-access-df9nx\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: 
\"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.839972 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7jd\" (UniqueName: \"kubernetes.io/projected/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-kube-api-access-mk7jd\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840006 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-client-ca\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840026 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3c5fb5-4205-4162-9d9e-b522ee092236-config\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840058 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-serving-cert\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840083 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840106 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-config\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840125 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rngtr\" (UniqueName: \"kubernetes.io/projected/6bea838f-25ef-4690-b5c9-feddd10b04bf-kube-api-access-rngtr\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840156 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840227 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840287 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-config\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840344 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840410 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8c7\" (UniqueName: \"kubernetes.io/projected/e6245e77-409a-4116-8c6e-78b21d87529f-kube-api-access-dq8c7\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840473 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840528 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-serving-cert\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840548 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c846db13-b93b-4e07-9e7b-e22106203982-audit-dir\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840568 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840687 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4sb\" (UniqueName: 
\"kubernetes.io/projected/513ccd39-0870-4964-85a2-0e9eb9d14a85-kube-api-access-jx4sb\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840730 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-audit-policies\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840764 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840781 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d318028-1d65-442a-9e50-ccf71fb54b04-serving-cert\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840813 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ccd39-0870-4964-85a2-0e9eb9d14a85-serving-cert\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840833 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6245e77-409a-4116-8c6e-78b21d87529f-node-pullsecrets\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840854 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bea838f-25ef-4690-b5c9-feddd10b04bf-audit-dir\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840887 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chgrt\" (UniqueName: \"kubernetes.io/projected/1d318028-1d65-442a-9e50-ccf71fb54b04-kube-api-access-chgrt\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840906 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840926 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a0343333-605f-4fb8-adb7-8423a1d36552-machine-approver-tls\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840946 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xcgv\" (UniqueName: \"kubernetes.io/projected/8fa74b73-0b76-426c-a769-39477ab913f6-kube-api-access-2xcgv\") pod \"downloads-7954f5f757-mc6nm\" (UID: \"8fa74b73-0b76-426c-a769-39477ab913f6\") " pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.840995 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-etcd-client\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841023 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-etcd-client\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841064 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6245e77-409a-4116-8c6e-78b21d87529f-audit-dir\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841085 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e19380fe-fa6c-4c7e-a706-aea1c30a6013-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841120 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-config\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841241 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b3c5fb5-4205-4162-9d9e-b522ee092236-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841279 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxqp\" (UniqueName: 
\"kubernetes.io/projected/e9844493-3620-4f52-bfae-61a79062d001-kube-api-access-fjxqp\") pod \"cluster-samples-operator-665b6dd947-5r2wv\" (UID: \"e9844493-3620-4f52-bfae-61a79062d001\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841333 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-client-ca\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841358 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnm7\" (UniqueName: \"kubernetes.io/projected/c846db13-b93b-4e07-9e7b-e22106203982-kube-api-access-2nnm7\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841383 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a0343333-605f-4fb8-adb7-8423a1d36552-auth-proxy-config\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841427 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b3c5fb5-4205-4162-9d9e-b522ee092236-images\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841467 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-service-ca-bundle\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841501 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvnz\" (UniqueName: \"kubernetes.io/projected/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-kube-api-access-qmvnz\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841521 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841545 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-config\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841564 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-trusted-ca\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.866524 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-audit-policies\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.867422 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.870410 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.873937 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-whxn9"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.875236 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.875835 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.877421 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.877946 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.879824 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.880402 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-audit\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.841355 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.876352 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.880567 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.881201 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.881293 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.883390 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-serving-cert\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.884408 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6245e77-409a-4116-8c6e-78b21d87529f-node-pullsecrets\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.885164 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.885341 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.885376 4684 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.886038 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.886128 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-encryption-config\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.886274 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.886828 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-config\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.886952 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bea838f-25ef-4690-b5c9-feddd10b04bf-audit-dir\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.887637 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-audit-policies\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.887662 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3c5fb5-4205-4162-9d9e-b522ee092236-config\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.888235 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-config\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.888958 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-client-ca\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.889043 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-image-import-ca\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.891260 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7895"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.894450 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.894900 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-serving-cert\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.895328 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-serving-cert\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.895382 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c846db13-b93b-4e07-9e7b-e22106203982-audit-dir\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.895695 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zp7ft"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.895893 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0343333-605f-4fb8-adb7-8423a1d36552-config\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.899718 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.903960 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.905408 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.902960 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.907327 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a0343333-605f-4fb8-adb7-8423a1d36552-auth-proxy-config\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.908440 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.910204 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.910336 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6245e77-409a-4116-8c6e-78b21d87529f-audit-dir\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.910576 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.911340 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.912460 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-config\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.912977 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-config\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.901889 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.902396 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bea838f-25ef-4690-b5c9-feddd10b04bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.916085 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.916184 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.916628 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfmsb"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.916878 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b3c5fb5-4205-4162-9d9e-b522ee092236-images\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.917153 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.917436 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.903072 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.923500 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9844493-3620-4f52-bfae-61a79062d001-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5r2wv\" (UID: \"e9844493-3620-4f52-bfae-61a79062d001\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.925640 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d318028-1d65-442a-9e50-ccf71fb54b04-service-ca-bundle\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.926017 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6245e77-409a-4116-8c6e-78b21d87529f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.926642 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-client-ca\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.926877 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.927215 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d318028-1d65-442a-9e50-ccf71fb54b04-serving-cert\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.927612 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.928042 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.928290 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.933117 4684 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.933516 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b3c5fb5-4205-4162-9d9e-b522ee092236-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.933866 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tk452"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.934067 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.934163 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.934323 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.934524 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.934619 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.934961 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.940476 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sxckj"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.941140 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.941831 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.941997 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.942123 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.942314 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-etcd-client\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.942387 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a0343333-605f-4fb8-adb7-8423a1d36552-machine-approver-tls\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.942723 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.943173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk7jd\" (UniqueName: \"kubernetes.io/projected/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-kube-api-access-mk7jd\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.943208 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-ca\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.944267 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.945261 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.945808 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.945867 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.947288 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.947514 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6bea838f-25ef-4690-b5c9-feddd10b04bf-encryption-config\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.947577 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kmzq\" (UniqueName: \"kubernetes.io/projected/0457957b-8bad-468f-9602-6d32a17c8f92-kube-api-access-5kmzq\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.948661 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.949351 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ccd39-0870-4964-85a2-0e9eb9d14a85-serving-cert\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.950050 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e52e2888-8938-4b6f-96a3-e25eaaaf112c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.950354 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcr99\" (UniqueName: \"kubernetes.io/projected/f2cab908-172f-4775-881a-226d8c87bcdc-kube-api-access-bcr99\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.950399 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e52e2888-8938-4b6f-96a3-e25eaaaf112c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: 
\"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.952647 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e19380fe-fa6c-4c7e-a706-aea1c30a6013-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.952989 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecf2330-df0b-41ad-99fd-7a58537bfbc6-config\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.953159 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2cab908-172f-4775-881a-226d8c87bcdc-serving-cert\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.953431 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-service-ca\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.953599 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-config\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.953640 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-trusted-ca\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954027 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-serving-cert\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954163 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0457957b-8bad-468f-9602-6d32a17c8f92-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954196 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-config\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954340 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fecf2330-df0b-41ad-99fd-7a58537bfbc6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954363 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fecf2330-df0b-41ad-99fd-7a58537bfbc6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954622 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-client\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954746 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e19380fe-fa6c-4c7e-a706-aea1c30a6013-serving-cert\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954775 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xzz\" (UniqueName: \"kubernetes.io/projected/e19380fe-fa6c-4c7e-a706-aea1c30a6013-kube-api-access-98xzz\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954913 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e52e2888-8938-4b6f-96a3-e25eaaaf112c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.955058 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2lr\" (UniqueName: \"kubernetes.io/projected/e52e2888-8938-4b6f-96a3-e25eaaaf112c-kube-api-access-hr2lr\") pod 
\"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.955340 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0457957b-8bad-468f-9602-6d32a17c8f92-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.955370 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-trusted-ca\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.954554 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-config\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.953785 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e19380fe-fa6c-4c7e-a706-aea1c30a6013-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.956057 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.956398 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hv7d8"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.958924 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6245e77-409a-4116-8c6e-78b21d87529f-etcd-client\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.959027 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kx2tw"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.962094 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pgngb"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.962158 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.962404 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e19380fe-fa6c-4c7e-a706-aea1c30a6013-serving-cert\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.963769 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.965198 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.966385 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfmsb"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.967373 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.968762 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.975421 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mc6nm"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.975489 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-642xz"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.980586 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bxczb"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.992806 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-serving-cert\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.993129 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8"] Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.993323 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:23 crc kubenswrapper[4684]: I0123 09:09:23.993744 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.004992 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.005256 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.010022 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.014335 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7j9vw"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.024652 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.030614 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bxczb"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.031885 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.033026 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8tk99"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.035348 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.035554 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.037902 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tk452"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.040680 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.041320 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.041758 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.043670 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.043766 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.045584 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.046628 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p2wtg"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.047803 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zp7ft"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.049563 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wn9b6"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.050731 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.051935 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9m65q"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.052683 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.053070 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-76rxn"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.053959 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.054609 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wd9fz"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.056413 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8tk99"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0457957b-8bad-468f-9602-6d32a17c8f92-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057294 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-ca\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057352 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kmzq\" (UniqueName: \"kubernetes.io/projected/0457957b-8bad-468f-9602-6d32a17c8f92-kube-api-access-5kmzq\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057425 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e52e2888-8938-4b6f-96a3-e25eaaaf112c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057480 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcr99\" (UniqueName: \"kubernetes.io/projected/f2cab908-172f-4775-881a-226d8c87bcdc-kube-api-access-bcr99\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057515 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e52e2888-8938-4b6f-96a3-e25eaaaf112c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057564 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecf2330-df0b-41ad-99fd-7a58537bfbc6-config\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057603 
4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2cab908-172f-4775-881a-226d8c87bcdc-serving-cert\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057627 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-service-ca\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057666 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0457957b-8bad-468f-9602-6d32a17c8f92-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057691 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-config\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057759 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fecf2330-df0b-41ad-99fd-7a58537bfbc6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057782 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fecf2330-df0b-41ad-99fd-7a58537bfbc6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057803 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-client\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057834 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e52e2888-8938-4b6f-96a3-e25eaaaf112c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.057859 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2lr\" (UniqueName: 
\"kubernetes.io/projected/e52e2888-8938-4b6f-96a3-e25eaaaf112c-kube-api-access-hr2lr\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.058984 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-ca\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.059074 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sxckj"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.059112 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.059245 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-service-ca\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.060248 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e52e2888-8938-4b6f-96a3-e25eaaaf112c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.060920 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0457957b-8bad-468f-9602-6d32a17c8f92-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.060947 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.061294 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0457957b-8bad-468f-9602-6d32a17c8f92-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.061781 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2cab908-172f-4775-881a-226d8c87bcdc-config\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.062346 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 09:09:24 crc 
kubenswrapper[4684]: I0123 09:09:24.062685 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76rxn"] Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.063303 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2cab908-172f-4775-881a-226d8c87bcdc-serving-cert\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.072983 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2cab908-172f-4775-881a-226d8c87bcdc-etcd-client\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.073255 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e52e2888-8938-4b6f-96a3-e25eaaaf112c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.082169 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.102555 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.122641 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.142340 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.162264 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.183346 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.202036 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.221918 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.242629 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.265235 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.283300 4684 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.292743 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fecf2330-df0b-41ad-99fd-7a58537bfbc6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.302247 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.312320 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecf2330-df0b-41ad-99fd-7a58537bfbc6-config\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.322517 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.342983 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.363115 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.382992 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.402942 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.442852 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.463236 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.482290 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.503110 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.544774 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9z7v\" (UniqueName: \"kubernetes.io/projected/a0343333-605f-4fb8-adb7-8423a1d36552-kube-api-access-m9z7v\") pod \"machine-approver-56656f9798-fxzlb\" (UID: \"a0343333-605f-4fb8-adb7-8423a1d36552\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 
09:09:24.561593 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8c7\" (UniqueName: \"kubernetes.io/projected/e6245e77-409a-4116-8c6e-78b21d87529f-kube-api-access-dq8c7\") pod \"apiserver-76f77b778f-7j9vw\" (UID: \"e6245e77-409a-4116-8c6e-78b21d87529f\") " pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.571463 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.580940 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.582799 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: W0123 09:09:24.589069 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0343333_605f_4fb8_adb7_8423a1d36552.slice/crio-0fc1bb0ecf402f937dc570d289e040e2cc679129e850ad35d4542375f3ba71d9 WatchSource:0}: Error finding container 0fc1bb0ecf402f937dc570d289e040e2cc679129e850ad35d4542375f3ba71d9: Status 404 returned error can't find the container with id 0fc1bb0ecf402f937dc570d289e040e2cc679129e850ad35d4542375f3ba71d9 Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.602104 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.623081 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.642735 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.663402 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.682351 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.718764 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fjz\" (UniqueName: \"kubernetes.io/projected/9b3c5fb5-4205-4162-9d9e-b522ee092236-kube-api-access-59fjz\") pod \"machine-api-operator-5694c8668f-pgngb\" (UID: \"9b3c5fb5-4205-4162-9d9e-b522ee092236\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.723035 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.742596 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.762932 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.782442 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 09:09:24 crc 
kubenswrapper[4684]: I0123 09:09:24.812272 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.812409 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.823740 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.859226 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4sb\" (UniqueName: \"kubernetes.io/projected/513ccd39-0870-4964-85a2-0e9eb9d14a85-kube-api-access-jx4sb\") pod \"controller-manager-879f6c89f-l7895\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.876673 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvnz\" (UniqueName: \"kubernetes.io/projected/5ba2e281-6dc9-44ad-90ef-e389fddb83cf-kube-api-access-qmvnz\") pod \"openshift-apiserver-operator-796bbdcf4f-9crd7\" (UID: \"5ba2e281-6dc9-44ad-90ef-e389fddb83cf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.901933 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnm7\" (UniqueName: \"kubernetes.io/projected/c846db13-b93b-4e07-9e7b-e22106203982-kube-api-access-2nnm7\") pod \"oauth-openshift-558db77b4-hv7d8\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.903149 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.911203 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.920761 4684 request.go:700] Waited for 1.015413841s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.922477 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.961868 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9nx\" (UniqueName: \"kubernetes.io/projected/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-kube-api-access-df9nx\") pod \"route-controller-manager-6576b87f9c-wnhgg\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.961984 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 09:09:24 crc kubenswrapper[4684]: I0123 09:09:24.982666 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.002655 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.039738 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rngtr\" (UniqueName: \"kubernetes.io/projected/6bea838f-25ef-4690-b5c9-feddd10b04bf-kube-api-access-rngtr\") pod \"apiserver-7bbb656c7d-bhzj6\" (UID: \"6bea838f-25ef-4690-b5c9-feddd10b04bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.042646 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.062599 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.077433 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.082354 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.087627 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.103179 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.129837 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.143008 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.147385 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.156591 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.162458 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.182339 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.185595 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.202610 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.209123 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" event={"ID":"a0343333-605f-4fb8-adb7-8423a1d36552","Type":"ContainerStarted","Data":"0fc1bb0ecf402f937dc570d289e040e2cc679129e850ad35d4542375f3ba71d9"} Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.223354 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.277975 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xcgv\" (UniqueName: \"kubernetes.io/projected/8fa74b73-0b76-426c-a769-39477ab913f6-kube-api-access-2xcgv\") pod \"downloads-7954f5f757-mc6nm\" (UID: \"8fa74b73-0b76-426c-a769-39477ab913f6\") " pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.299549 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxqp\" (UniqueName: \"kubernetes.io/projected/e9844493-3620-4f52-bfae-61a79062d001-kube-api-access-fjxqp\") pod \"cluster-samples-operator-665b6dd947-5r2wv\" (UID: \"e9844493-3620-4f52-bfae-61a79062d001\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.302040 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.322498 4684 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.341850 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.362497 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.382969 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.402942 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.423497 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.426750 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.437901 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.442142 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.462026 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.482832 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.501899 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.522175 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.542567 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.562905 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.583034 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.622209 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.642434 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.677448 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98xzz\" (UniqueName: 
\"kubernetes.io/projected/e19380fe-fa6c-4c7e-a706-aea1c30a6013-kube-api-access-98xzz\") pod \"openshift-config-operator-7777fb866f-7g8g8\" (UID: \"e19380fe-fa6c-4c7e-a706-aea1c30a6013\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.682047 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.702098 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.722305 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.742813 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.764499 4684 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.783129 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.802664 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.823913 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.843398 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.863070 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.882978 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.902200 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.922491 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.923981 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.940136 4684 request.go:700] Waited for 1.881825073s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.959530 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e52e2888-8938-4b6f-96a3-e25eaaaf112c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.980602 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2lr\" (UniqueName: \"kubernetes.io/projected/e52e2888-8938-4b6f-96a3-e25eaaaf112c-kube-api-access-hr2lr\") pod \"cluster-image-registry-operator-dc59b4c8b-w66j2\" (UID: \"e52e2888-8938-4b6f-96a3-e25eaaaf112c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:25 crc kubenswrapper[4684]: I0123 09:09:25.996884 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kmzq\" (UniqueName: \"kubernetes.io/projected/0457957b-8bad-468f-9602-6d32a17c8f92-kube-api-access-5kmzq\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5k2t\" (UID: \"0457957b-8bad-468f-9602-6d32a17c8f92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.016253 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcr99\" (UniqueName: \"kubernetes.io/projected/f2cab908-172f-4775-881a-226d8c87bcdc-kube-api-access-bcr99\") pod \"etcd-operator-b45778765-p2wtg\" (UID: \"f2cab908-172f-4775-881a-226d8c87bcdc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.036648 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fecf2330-df0b-41ad-99fd-7a58537bfbc6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9np9f\" (UID: \"fecf2330-df0b-41ad-99fd-7a58537bfbc6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.082637 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.103521 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.510028 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.510367 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.510391 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk7jd\" (UniqueName: \"kubernetes.io/projected/2493a510-4c7f-4d74-b1e2-1bfde5d9513b-kube-api-access-mk7jd\") pod \"console-operator-58897d9998-642xz\" (UID: \"2493a510-4c7f-4d74-b1e2-1bfde5d9513b\") " pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.510452 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.510820 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.511385 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.511738 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-trusted-ca\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.511780 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.511812 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-certificates\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.511895 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d94b705-3a9a-4cb2-87f1-b898ba859d79-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.512300 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d94b705-3a9a-4cb2-87f1-b898ba859d79-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: E0123 09:09:26.512522 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.012506277 +0000 UTC m=+139.635884898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.512658 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-tls\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.512810 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-bound-sa-token\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.513981 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgptj\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.516635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chgrt\" (UniqueName: \"kubernetes.io/projected/1d318028-1d65-442a-9e50-ccf71fb54b04-kube-api-access-chgrt\") pod \"authentication-operator-69f744f599-tbqbw\" (UID: \"1d318028-1d65-442a-9e50-ccf71fb54b04\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.597022 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615326 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615555 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-trusted-ca\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615599 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-certificates\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615629 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9289743-2808-4efc-a6f9-bd8b5e33d553-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615651 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8rx\" (UniqueName: \"kubernetes.io/projected/31ebe80c-870d-4be6-844c-504b72eb09d6-kube-api-access-2s8rx\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b327e86-ed37-44e8-b30d-ef50195f0972-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615783 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcw9\" (UniqueName: \"kubernetes.io/projected/b4b2d72e-d91a-4cde-8e13-205f5346b4ba-kube-api-access-jwcw9\") pod \"migrator-59844c95c7-g8kmw\" (UID: \"b4b2d72e-d91a-4cde-8e13-205f5346b4ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615814 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-oauth-serving-cert\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc 
kubenswrapper[4684]: I0123 09:09:26.615871 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d94b705-3a9a-4cb2-87f1-b898ba859d79-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615897 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-console-config\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615932 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtlk\" (UniqueName: \"kubernetes.io/projected/af9efd93-5eee-4e16-a36f-25d29663ff5c-kube-api-access-cqtlk\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.615953 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b327e86-ed37-44e8-b30d-ef50195f0972-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616036 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9289743-2808-4efc-a6f9-bd8b5e33d553-config\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616072 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d94b705-3a9a-4cb2-87f1-b898ba859d79-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616095 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-tls\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616121 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-serving-cert\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616142 4684 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-bound-sa-token\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616178 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af9efd93-5eee-4e16-a36f-25d29663ff5c-proxy-tls\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616199 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jxvf\" (UniqueName: \"kubernetes.io/projected/94f9b51c-2051-4b01-bf38-09a32c853699-kube-api-access-7jxvf\") pod \"dns-operator-744455d44c-kx2tw\" (UID: \"94f9b51c-2051-4b01-bf38-09a32c853699\") " pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616222 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz5tt\" (UniqueName: \"kubernetes.io/projected/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-kube-api-access-tz5tt\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616245 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-trusted-ca-bundle\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616269 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94f9b51c-2051-4b01-bf38-09a32c853699-metrics-tls\") pod \"dns-operator-744455d44c-kx2tw\" (UID: \"94f9b51c-2051-4b01-bf38-09a32c853699\") " pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616295 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgptj\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616341 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616371 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-oauth-config\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616393 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af9efd93-5eee-4e16-a36f-25d29663ff5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616425 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-service-ca\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616458 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9289743-2808-4efc-a6f9-bd8b5e33d553-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616503 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b327e86-ed37-44e8-b30d-ef50195f0972-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.616524 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: E0123 09:09:26.616642 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.116624313 +0000 UTC m=+139.740002854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.618027 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-trusted-ca\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.624763 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d94b705-3a9a-4cb2-87f1-b898ba859d79-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.630671 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d94b705-3a9a-4cb2-87f1-b898ba859d79-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.720227 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-certificates\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.720897 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b327e86-ed37-44e8-b30d-ef50195f0972-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.720950 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcw9\" (UniqueName: \"kubernetes.io/projected/b4b2d72e-d91a-4cde-8e13-205f5346b4ba-kube-api-access-jwcw9\") pod \"migrator-59844c95c7-g8kmw\" (UID: \"b4b2d72e-d91a-4cde-8e13-205f5346b4ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.720973 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-oauth-serving-cert\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.720997 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" 
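Both volume errors in this stretch fail the same way: "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". The csi-hostpathplugin-8tk99 pod that would register that driver with the kubelet is itself only now getting its volumes mounted (see the entries below), so the mount and unmount back off until registration completes. One hedged way to watch registration from outside is to read the drivers the kubelet has published on the node's CSINode object; the node name "crc" comes from the log prefix and the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; any working kubeconfig will do.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The kubelet records each successfully registered CSI plugin in
	// CSINode.Spec.Drivers; the errors above mean
	// kubevirt.io.hostpath-provisioner is not in this list yet.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}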
(UniqueName: \"kubernetes.io/configmap/ebc04459-cb74-4868-8eb4-51a4d8856890-images\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721025 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-console-config\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721050 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bcafabc-bd27-41f8-bcec-0ea45d079a79-cert\") pod \"ingress-canary-76rxn\" (UID: \"2bcafabc-bd27-41f8-bcec-0ea45d079a79\") " pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721072 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-stats-auth\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721096 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvb6\" (UniqueName: \"kubernetes.io/projected/9071fc4b-8d0f-41fe-832b-c3c9f5f0351b-kube-api-access-7kvb6\") pod \"package-server-manager-789f6589d5-2xmjn\" (UID: \"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721117 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e60787da-c4f0-4034-b543-f70e46a6ded4-srv-cert\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721140 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc04459-cb74-4868-8eb4-51a4d8856890-proxy-tls\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721164 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ndx\" (UniqueName: \"kubernetes.io/projected/52f6483b-3d4f-482d-8802-fb7ba6736b69-kube-api-access-t9ndx\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721188 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7bm\" (UniqueName: \"kubernetes.io/projected/e1331c42-e8e8-4e17-bfa3-0961208c57fd-kube-api-access-sj7bm\") pod \"service-ca-9c57cc56f-tk452\" (UID: 
\"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721213 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f92af7c0-b6ef-4fe1-b057-b2424aa96458-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4qpn2\" (UID: \"f92af7c0-b6ef-4fe1-b057-b2424aa96458\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721251 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af9efd93-5eee-4e16-a36f-25d29663ff5c-proxy-tls\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721278 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz5tt\" (UniqueName: \"kubernetes.io/projected/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-kube-api-access-tz5tt\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721302 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-secret-volume\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721323 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-trusted-ca-bundle\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721347 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e60787da-c4f0-4034-b543-f70e46a6ded4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721369 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5p5q\" (UniqueName: \"kubernetes.io/projected/e60787da-c4f0-4034-b543-f70e46a6ded4-kube-api-access-c5p5q\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721396 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94f9b51c-2051-4b01-bf38-09a32c853699-metrics-tls\") pod 
\"dns-operator-744455d44c-kx2tw\" (UID: \"94f9b51c-2051-4b01-bf38-09a32c853699\") " pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721422 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-csi-data-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721445 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/991fb184-b936-412b-ae42-fe3a085c4bf9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721474 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvx6\" (UniqueName: \"kubernetes.io/projected/9eb90a45-05a1-450a-93d7-d20129d62e40-kube-api-access-dqvx6\") pod \"multus-admission-controller-857f4d67dd-zp7ft\" (UID: \"9eb90a45-05a1-450a-93d7-d20129d62e40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721498 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721533 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ad4033e-405b-4649-a039-5169aa401f18-metrics-tls\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721552 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-metrics-certs\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721589 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-oauth-config\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721624 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0966605-6157-4679-a78d-7e744be794a0-certs\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 
09:09:26.721647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35a3e02f-21f3-4762-8260-c52003d4499c-apiservice-cert\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721731 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-service-ca\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721756 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/991fb184-b936-412b-ae42-fe3a085c4bf9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721777 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-config-volume\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721800 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35a3e02f-21f3-4762-8260-c52003d4499c-webhook-cert\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721823 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjnf6\" (UniqueName: \"kubernetes.io/projected/637adfa6-5f16-415d-b536-f8c65e5b32c2-kube-api-access-sjnf6\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721862 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtxd\" (UniqueName: \"kubernetes.io/projected/31daf061-abd6-415c-9cd6-2e59cb07d605-kube-api-access-lmtxd\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721935 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9289743-2808-4efc-a6f9-bd8b5e33d553-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721966 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
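Every volume in this stretch walks the same three-step reconciler progression that interleaves through the entries: VerifyControllerAttachedVolume started (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). The UniqueName in each message packs the plugin, the pod UID, and the volume name into one string; a hypothetical parser for that layout (not kubelet code; the fixed 36-character UID split is an assumption read off the entries above):

package main

import (
	"fmt"
	"strings"
)

// splitUniqueName unpacks UniqueName strings like
// "kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj"
// into plugin, pod UID, and volume name. Hypothetical helper for log reading.
func splitUniqueName(u string) (plugin, podUID, volume string, err error) {
	i := strings.LastIndex(u, "/")
	if i < 0 || len(u) < i+1+36 {
		return "", "", "", fmt.Errorf("unexpected UniqueName %q", u)
	}
	plugin = u[:i]
	tail := u[i+1:]
	// RFC 4122 UIDs are 36 characters, followed by "-<volume-name>".
	podUID, volume = tail[:36], strings.TrimPrefix(tail[36:], "-")
	return plugin, podUID, volume, nil
}

func main() {
	fmt.Println(splitUniqueName(
		"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj"))
	// kubernetes.io/projected 4d94b705-3a9a-4cb2-87f1-b898ba859d79 kube-api-access-lgptj <nil>
}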
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.721996 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b327e86-ed37-44e8-b30d-ef50195f0972-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722022 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722067 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e1331c42-e8e8-4e17-bfa3-0961208c57fd-signing-key\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722091 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-default-certificate\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722125 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8rx\" (UniqueName: \"kubernetes.io/projected/31ebe80c-870d-4be6-844c-504b72eb09d6-kube-api-access-2s8rx\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722169 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9eb90a45-05a1-450a-93d7-d20129d62e40-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zp7ft\" (UID: \"9eb90a45-05a1-450a-93d7-d20129d62e40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722472 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-oauth-serving-cert\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.722575 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723052 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e1331c42-e8e8-4e17-bfa3-0961208c57fd-signing-cabundle\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723117 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-serving-cert\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723140 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ad4033e-405b-4649-a039-5169aa401f18-config-volume\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723161 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637adfa6-5f16-415d-b536-f8c65e5b32c2-service-ca-bundle\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723190 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9289743-2808-4efc-a6f9-bd8b5e33d553-config\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723243 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqtlk\" (UniqueName: \"kubernetes.io/projected/af9efd93-5eee-4e16-a36f-25d29663ff5c-kube-api-access-cqtlk\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723270 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b327e86-ed37-44e8-b30d-ef50195f0972-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723322 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvd4r\" (UniqueName: \"kubernetes.io/projected/6ad4033e-405b-4649-a039-5169aa401f18-kube-api-access-xvd4r\") pod \"dns-default-bxczb\" (UID: 
\"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723356 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-mountpoint-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723382 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-serving-cert\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723449 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/35a3e02f-21f3-4762-8260-c52003d4499c-tmpfs\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723489 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jxvf\" (UniqueName: \"kubernetes.io/projected/94f9b51c-2051-4b01-bf38-09a32c853699-kube-api-access-7jxvf\") pod \"dns-operator-744455d44c-kx2tw\" (UID: \"94f9b51c-2051-4b01-bf38-09a32c853699\") " pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723515 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmhf\" (UniqueName: \"kubernetes.io/projected/f92af7c0-b6ef-4fe1-b057-b2424aa96458-kube-api-access-7pmhf\") pod \"control-plane-machine-set-operator-78cbb6b69f-4qpn2\" (UID: \"f92af7c0-b6ef-4fe1-b057-b2424aa96458\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723540 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-plugins-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723613 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/991fb184-b936-412b-ae42-fe3a085c4bf9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723639 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc04459-cb74-4868-8eb4-51a4d8856890-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 
09:09:26.723694 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/31daf061-abd6-415c-9cd6-2e59cb07d605-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723749 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrj8b\" (UniqueName: \"kubernetes.io/projected/35a3e02f-21f3-4762-8260-c52003d4499c-kube-api-access-wrj8b\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723775 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svhdb\" (UniqueName: \"kubernetes.io/projected/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-kube-api-access-svhdb\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723840 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723905 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0966605-6157-4679-a78d-7e744be794a0-node-bootstrap-token\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.723936 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgmzc\" (UniqueName: \"kubernetes.io/projected/991fb184-b936-412b-ae42-fe3a085c4bf9-kube-api-access-qgmzc\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724008 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af9efd93-5eee-4e16-a36f-25d29663ff5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724033 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvsq9\" (UniqueName: \"kubernetes.io/projected/2bcafabc-bd27-41f8-bcec-0ea45d079a79-kube-api-access-qvsq9\") pod \"ingress-canary-76rxn\" (UID: \"2bcafabc-bd27-41f8-bcec-0ea45d079a79\") " pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:26 crc 
kubenswrapper[4684]: I0123 09:09:26.724068 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbxm\" (UniqueName: \"kubernetes.io/projected/f0966605-6157-4679-a78d-7e744be794a0-kube-api-access-4tbxm\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724095 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-socket-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724118 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-registration-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724144 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9071fc4b-8d0f-41fe-832b-c3c9f5f0351b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2xmjn\" (UID: \"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724173 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47xb2\" (UniqueName: \"kubernetes.io/projected/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-kube-api-access-47xb2\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724199 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmx7g\" (UniqueName: \"kubernetes.io/projected/ebc04459-cb74-4868-8eb4-51a4d8856890-kube-api-access-vmx7g\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724222 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31daf061-abd6-415c-9cd6-2e59cb07d605-srv-cert\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724254 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-config\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: 
I0123 09:09:26.724276 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcbh\" (UniqueName: \"kubernetes.io/projected/703df6b3-b903-4818-b0c8-8681de1c6065-kube-api-access-nmcbh\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724308 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.724334 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9289743-2808-4efc-a6f9-bd8b5e33d553-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.727193 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-console-config\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.728573 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b327e86-ed37-44e8-b30d-ef50195f0972-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.730471 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-trusted-ca-bundle\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.731820 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-service-ca\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.733291 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9289743-2808-4efc-a6f9-bd8b5e33d553-config\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: E0123 09:09:26.737384 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 09:09:27.237370013 +0000 UTC m=+139.860748554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.737888 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af9efd93-5eee-4e16-a36f-25d29663ff5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.738145 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94f9b51c-2051-4b01-bf38-09a32c853699-metrics-tls\") pod \"dns-operator-744455d44c-kx2tw\" (UID: \"94f9b51c-2051-4b01-bf38-09a32c853699\") " pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.741617 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-oauth-config\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.743785 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-tls\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.744230 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af9efd93-5eee-4e16-a36f-25d29663ff5c-proxy-tls\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.745661 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.745989 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b327e86-ed37-44e8-b30d-ef50195f0972-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc 
kubenswrapper[4684]: I0123 09:09:26.752959 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-serving-cert\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.759840 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-bound-sa-token\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.770476 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgptj\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.778934 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9289743-2808-4efc-a6f9-bd8b5e33d553-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.779150 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b327e86-ed37-44e8-b30d-ef50195f0972-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sm5m4\" (UID: \"1b327e86-ed37-44e8-b30d-ef50195f0972\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.780417 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcw9\" (UniqueName: \"kubernetes.io/projected/b4b2d72e-d91a-4cde-8e13-205f5346b4ba-kube-api-access-jwcw9\") pod \"migrator-59844c95c7-g8kmw\" (UID: \"b4b2d72e-d91a-4cde-8e13-205f5346b4ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.794571 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.828991 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829095 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-default-certificate\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829124 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e1331c42-e8e8-4e17-bfa3-0961208c57fd-signing-key\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829156 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9eb90a45-05a1-450a-93d7-d20129d62e40-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zp7ft\" (UID: \"9eb90a45-05a1-450a-93d7-d20129d62e40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829179 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e1331c42-e8e8-4e17-bfa3-0961208c57fd-signing-cabundle\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829200 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-serving-cert\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829219 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ad4033e-405b-4649-a039-5169aa401f18-config-volume\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829237 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637adfa6-5f16-415d-b536-f8c65e5b32c2-service-ca-bundle\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829278 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvd4r\" (UniqueName: 
\"kubernetes.io/projected/6ad4033e-405b-4649-a039-5169aa401f18-kube-api-access-xvd4r\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829314 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-mountpoint-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829343 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/35a3e02f-21f3-4762-8260-c52003d4499c-tmpfs\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829374 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pmhf\" (UniqueName: \"kubernetes.io/projected/f92af7c0-b6ef-4fe1-b057-b2424aa96458-kube-api-access-7pmhf\") pod \"control-plane-machine-set-operator-78cbb6b69f-4qpn2\" (UID: \"f92af7c0-b6ef-4fe1-b057-b2424aa96458\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829400 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-plugins-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829426 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/991fb184-b936-412b-ae42-fe3a085c4bf9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829447 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc04459-cb74-4868-8eb4-51a4d8856890-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829470 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/31daf061-abd6-415c-9cd6-2e59cb07d605-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829502 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrj8b\" (UniqueName: \"kubernetes.io/projected/35a3e02f-21f3-4762-8260-c52003d4499c-kube-api-access-wrj8b\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829529 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svhdb\" (UniqueName: \"kubernetes.io/projected/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-kube-api-access-svhdb\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829554 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829577 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f0966605-6157-4679-a78d-7e744be794a0-node-bootstrap-token\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829604 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgmzc\" (UniqueName: \"kubernetes.io/projected/991fb184-b936-412b-ae42-fe3a085c4bf9-kube-api-access-qgmzc\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829630 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvsq9\" (UniqueName: \"kubernetes.io/projected/2bcafabc-bd27-41f8-bcec-0ea45d079a79-kube-api-access-qvsq9\") pod \"ingress-canary-76rxn\" (UID: \"2bcafabc-bd27-41f8-bcec-0ea45d079a79\") " pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829665 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbxm\" (UniqueName: \"kubernetes.io/projected/f0966605-6157-4679-a78d-7e744be794a0-kube-api-access-4tbxm\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.829690 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-registration-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830763 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9071fc4b-8d0f-41fe-832b-c3c9f5f0351b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2xmjn\" (UID: \"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 
09:09:26.830793 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-socket-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830824 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47xb2\" (UniqueName: \"kubernetes.io/projected/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-kube-api-access-47xb2\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830849 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx7g\" (UniqueName: \"kubernetes.io/projected/ebc04459-cb74-4868-8eb4-51a4d8856890-kube-api-access-vmx7g\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830876 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31daf061-abd6-415c-9cd6-2e59cb07d605-srv-cert\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830904 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-config\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830924 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmcbh\" (UniqueName: \"kubernetes.io/projected/703df6b3-b903-4818-b0c8-8681de1c6065-kube-api-access-nmcbh\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830959 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc04459-cb74-4868-8eb4-51a4d8856890-images\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.830987 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-stats-auth\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831008 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bcafabc-bd27-41f8-bcec-0ea45d079a79-cert\") pod \"ingress-canary-76rxn\" (UID: 
\"2bcafabc-bd27-41f8-bcec-0ea45d079a79\") " pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831029 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e60787da-c4f0-4034-b543-f70e46a6ded4-srv-cert\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831053 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvb6\" (UniqueName: \"kubernetes.io/projected/9071fc4b-8d0f-41fe-832b-c3c9f5f0351b-kube-api-access-7kvb6\") pod \"package-server-manager-789f6589d5-2xmjn\" (UID: \"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831081 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc04459-cb74-4868-8eb4-51a4d8856890-proxy-tls\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831108 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9ndx\" (UniqueName: \"kubernetes.io/projected/52f6483b-3d4f-482d-8802-fb7ba6736b69-kube-api-access-t9ndx\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831136 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj7bm\" (UniqueName: \"kubernetes.io/projected/e1331c42-e8e8-4e17-bfa3-0961208c57fd-kube-api-access-sj7bm\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831162 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f92af7c0-b6ef-4fe1-b057-b2424aa96458-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4qpn2\" (UID: \"f92af7c0-b6ef-4fe1-b057-b2424aa96458\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831207 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-secret-volume\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831233 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e60787da-c4f0-4034-b543-f70e46a6ded4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 
09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831258 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-csi-data-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831280 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/991fb184-b936-412b-ae42-fe3a085c4bf9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831303 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5p5q\" (UniqueName: \"kubernetes.io/projected/e60787da-c4f0-4034-b543-f70e46a6ded4-kube-api-access-c5p5q\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831332 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvx6\" (UniqueName: \"kubernetes.io/projected/9eb90a45-05a1-450a-93d7-d20129d62e40-kube-api-access-dqvx6\") pod \"multus-admission-controller-857f4d67dd-zp7ft\" (UID: \"9eb90a45-05a1-450a-93d7-d20129d62e40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831359 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ad4033e-405b-4649-a039-5169aa401f18-metrics-tls\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.831380 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-metrics-certs\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832735 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0966605-6157-4679-a78d-7e744be794a0-certs\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832777 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35a3e02f-21f3-4762-8260-c52003d4499c-apiservice-cert\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832805 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/991fb184-b936-412b-ae42-fe3a085c4bf9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: 
\"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832830 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-config-volume\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832852 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35a3e02f-21f3-4762-8260-c52003d4499c-webhook-cert\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832876 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjnf6\" (UniqueName: \"kubernetes.io/projected/637adfa6-5f16-415d-b536-f8c65e5b32c2-kube-api-access-sjnf6\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832902 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtxd\" (UniqueName: \"kubernetes.io/projected/31daf061-abd6-415c-9cd6-2e59cb07d605-kube-api-access-lmtxd\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.832950 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: E0123 09:09:26.833185 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.333152361 +0000 UTC m=+139.956530912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.834387 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.838673 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-config-volume\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.844023 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz5tt\" (UniqueName: \"kubernetes.io/projected/97bfbd24-43dd-4c7c-abc0-cc5c502d938a-kube-api-access-tz5tt\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkzz2\" (UID: \"97bfbd24-43dd-4c7c-abc0-cc5c502d938a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.845548 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/991fb184-b936-412b-ae42-fe3a085c4bf9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.846798 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-socket-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.848870 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc04459-cb74-4868-8eb4-51a4d8856890-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.853659 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31daf061-abd6-415c-9cd6-2e59cb07d605-srv-cert\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.854026 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.854372 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-config\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.860910 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9071fc4b-8d0f-41fe-832b-c3c9f5f0351b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2xmjn\" (UID: \"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.861375 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-registration-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.861560 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-csi-data-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.872514 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/991fb184-b936-412b-ae42-fe3a085c4bf9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.883384 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e1331c42-e8e8-4e17-bfa3-0961208c57fd-signing-cabundle\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.894410 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-serving-cert\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.895162 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ad4033e-405b-4649-a039-5169aa401f18-config-volume\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.896007 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637adfa6-5f16-415d-b536-f8c65e5b32c2-service-ca-bundle\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.896203 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-mountpoint-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.896583 
4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/35a3e02f-21f3-4762-8260-c52003d4499c-tmpfs\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.896748 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/52f6483b-3d4f-482d-8802-fb7ba6736b69-plugins-dir\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.909744 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.910750 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/31daf061-abd6-415c-9cd6-2e59cb07d605-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.924378 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc04459-cb74-4868-8eb4-51a4d8856890-images\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.924649 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e60787da-c4f0-4034-b543-f70e46a6ded4-srv-cert\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.925956 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e60787da-c4f0-4034-b543-f70e46a6ded4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.930205 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc04459-cb74-4868-8eb4-51a4d8856890-proxy-tls\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.933271 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bcafabc-bd27-41f8-bcec-0ea45d079a79-cert\") pod \"ingress-canary-76rxn\" (UID: \"2bcafabc-bd27-41f8-bcec-0ea45d079a79\") " 
pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.933755 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:26 crc kubenswrapper[4684]: E0123 09:09:26.934349 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.434332222 +0000 UTC m=+140.057710763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.940931 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-metrics-certs\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.941474 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e1331c42-e8e8-4e17-bfa3-0961208c57fd-signing-key\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.949627 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8rx\" (UniqueName: \"kubernetes.io/projected/31ebe80c-870d-4be6-844c-504b72eb09d6-kube-api-access-2s8rx\") pod \"console-f9d7485db-wd9fz\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") " pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.950511 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f92af7c0-b6ef-4fe1-b057-b2424aa96458-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4qpn2\" (UID: \"f92af7c0-b6ef-4fe1-b057-b2424aa96458\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.951268 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-default-certificate\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.951394 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f0966605-6157-4679-a78d-7e744be794a0-node-bootstrap-token\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.951541 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35a3e02f-21f3-4762-8260-c52003d4499c-apiservice-cert\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.951786 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9eb90a45-05a1-450a-93d7-d20129d62e40-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zp7ft\" (UID: \"9eb90a45-05a1-450a-93d7-d20129d62e40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.953857 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35a3e02f-21f3-4762-8260-c52003d4499c-webhook-cert\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.954064 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-secret-volume\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.957172 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ad4033e-405b-4649-a039-5169aa401f18-metrics-tls\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.962332 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/637adfa6-5f16-415d-b536-f8c65e5b32c2-stats-auth\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.962864 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtxd\" (UniqueName: \"kubernetes.io/projected/31daf061-abd6-415c-9cd6-2e59cb07d605-kube-api-access-lmtxd\") pod \"olm-operator-6b444d44fb-qj7jr\" (UID: \"31daf061-abd6-415c-9cd6-2e59cb07d605\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.966540 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jxvf\" (UniqueName: \"kubernetes.io/projected/94f9b51c-2051-4b01-bf38-09a32c853699-kube-api-access-7jxvf\") pod \"dns-operator-744455d44c-kx2tw\" (UID: \"94f9b51c-2051-4b01-bf38-09a32c853699\") " pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.973345 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cqtlk\" (UniqueName: \"kubernetes.io/projected/af9efd93-5eee-4e16-a36f-25d29663ff5c-kube-api-access-cqtlk\") pod \"machine-config-controller-84d6567774-r9qbw\" (UID: \"af9efd93-5eee-4e16-a36f-25d29663ff5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.979242 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f0966605-6157-4679-a78d-7e744be794a0-certs\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:26 crc kubenswrapper[4684]: I0123 09:09:26.979489 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9289743-2808-4efc-a6f9-bd8b5e33d553-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dhf86\" (UID: \"a9289743-2808-4efc-a6f9-bd8b5e33d553\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.001047 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/991fb184-b936-412b-ae42-fe3a085c4bf9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.011034 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.013815 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvb6\" (UniqueName: \"kubernetes.io/projected/9071fc4b-8d0f-41fe-832b-c3c9f5f0351b-kube-api-access-7kvb6\") pod \"package-server-manager-789f6589d5-2xmjn\" (UID: \"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.018792 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjnf6\" (UniqueName: \"kubernetes.io/projected/637adfa6-5f16-415d-b536-f8c65e5b32c2-kube-api-access-sjnf6\") pod \"router-default-5444994796-whxn9\" (UID: \"637adfa6-5f16-415d-b536-f8c65e5b32c2\") " pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.025107 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.038140 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.038640 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 09:09:27.538621694 +0000 UTC m=+140.162000235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.051870 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47xb2\" (UniqueName: \"kubernetes.io/projected/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-kube-api-access-47xb2\") pod \"collect-profiles-29485980-dfbbw\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.067814 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.069349 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmx7g\" (UniqueName: \"kubernetes.io/projected/ebc04459-cb74-4868-8eb4-51a4d8856890-kube-api-access-vmx7g\") pod \"machine-config-operator-74547568cd-k7fnj\" (UID: \"ebc04459-cb74-4868-8eb4-51a4d8856890\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.093114 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrj8b\" (UniqueName: \"kubernetes.io/projected/35a3e02f-21f3-4762-8260-c52003d4499c-kube-api-access-wrj8b\") pod \"packageserver-d55dfcdfc-g94qp\" (UID: \"35a3e02f-21f3-4762-8260-c52003d4499c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.109825 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhdb\" (UniqueName: \"kubernetes.io/projected/af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5-kube-api-access-svhdb\") pod \"service-ca-operator-777779d784-sxckj\" (UID: \"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.116838 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.124008 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.129026 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj7bm\" (UniqueName: \"kubernetes.io/projected/e1331c42-e8e8-4e17-bfa3-0961208c57fd-kube-api-access-sj7bm\") pod \"service-ca-9c57cc56f-tk452\" (UID: \"e1331c42-e8e8-4e17-bfa3-0961208c57fd\") " pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.133881 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9ndx\" (UniqueName: \"kubernetes.io/projected/52f6483b-3d4f-482d-8802-fb7ba6736b69-kube-api-access-t9ndx\") pod \"csi-hostpathplugin-8tk99\" (UID: \"52f6483b-3d4f-482d-8802-fb7ba6736b69\") " pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.140684 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.141868 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.641841731 +0000 UTC m=+140.265220282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.145925 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.168208 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.181258 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgmzc\" (UniqueName: \"kubernetes.io/projected/991fb184-b936-412b-ae42-fe3a085c4bf9-kube-api-access-qgmzc\") pod \"ingress-operator-5b745b69d9-5jrnp\" (UID: \"991fb184-b936-412b-ae42-fe3a085c4bf9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.196173 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.207522 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvsq9\" (UniqueName: \"kubernetes.io/projected/2bcafabc-bd27-41f8-bcec-0ea45d079a79-kube-api-access-qvsq9\") pod \"ingress-canary-76rxn\" (UID: \"2bcafabc-bd27-41f8-bcec-0ea45d079a79\") " pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.214104 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.216367 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5p5q\" (UniqueName: \"kubernetes.io/projected/e60787da-c4f0-4034-b543-f70e46a6ded4-kube-api-access-c5p5q\") pod \"catalog-operator-68c6474976-dxd9h\" (UID: \"e60787da-c4f0-4034-b543-f70e46a6ded4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.217404 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.219824 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbxm\" (UniqueName: \"kubernetes.io/projected/f0966605-6157-4679-a78d-7e744be794a0-kube-api-access-4tbxm\") pod \"machine-config-server-9m65q\" (UID: \"f0966605-6157-4679-a78d-7e744be794a0\") " pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.228450 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tk452" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.232934 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.238203 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" event={"ID":"a0343333-605f-4fb8-adb7-8423a1d36552","Type":"ContainerStarted","Data":"534dc87cdacaa06bb39eeffba78a0154bfe5caa5bbc41f169f720e506ce8eb8d"} Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.240025 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvx6\" (UniqueName: \"kubernetes.io/projected/9eb90a45-05a1-450a-93d7-d20129d62e40-kube-api-access-dqvx6\") pod \"multus-admission-controller-857f4d67dd-zp7ft\" (UID: \"9eb90a45-05a1-450a-93d7-d20129d62e40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.241575 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.242020 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.742003579 +0000 UTC m=+140.365382120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.247229 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.247635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvd4r\" (UniqueName: \"kubernetes.io/projected/6ad4033e-405b-4649-a039-5169aa401f18-kube-api-access-xvd4r\") pod \"dns-default-bxczb\" (UID: \"6ad4033e-405b-4649-a039-5169aa401f18\") " pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.258797 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmhf\" (UniqueName: \"kubernetes.io/projected/f92af7c0-b6ef-4fe1-b057-b2424aa96458-kube-api-access-7pmhf\") pod \"control-plane-machine-set-operator-78cbb6b69f-4qpn2\" (UID: \"f92af7c0-b6ef-4fe1-b057-b2424aa96458\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:27 crc kubenswrapper[4684]: W0123 09:09:27.261550 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637adfa6_5f16_415d_b536_f8c65e5b32c2.slice/crio-cf8604ce2ba997fda495baa729032f4b73f214f21f30c64fd1d161a81c8c1eee WatchSource:0}: Error finding container cf8604ce2ba997fda495baa729032f4b73f214f21f30c64fd1d161a81c8c1eee: Status 404 returned error can't find the container with id cf8604ce2ba997fda495baa729032f4b73f214f21f30c64fd1d161a81c8c1eee Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.272051 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.278395 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmcbh\" (UniqueName: \"kubernetes.io/projected/703df6b3-b903-4818-b0c8-8681de1c6065-kube-api-access-nmcbh\") pod \"marketplace-operator-79b997595-tfmsb\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.299684 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.312405 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9m65q" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.324403 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76rxn" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.345538 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.345956 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.84594447 +0000 UTC m=+140.469323011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.447463 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.448600 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.948555397 +0000 UTC m=+140.571933938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.454398 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.455006 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:27.954987764 +0000 UTC m=+140.578366305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.459818 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.478404 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.482961 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.501168 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.553541 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.557461 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.557794 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.057766507 +0000 UTC m=+140.681145048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.558140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.558520 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.058504481 +0000 UTC m=+140.681883032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.670755 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.671104 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.171089108 +0000 UTC m=+140.794467649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.774028 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.775564 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.274400878 +0000 UTC m=+140.897779419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.878228 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.878743 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.378722591 +0000 UTC m=+141.002101132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:27 crc kubenswrapper[4684]: I0123 09:09:27.980409 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:27 crc kubenswrapper[4684]: E0123 09:09:27.980823 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.480808731 +0000 UTC m=+141.104187272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.081397 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.081721 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.581687163 +0000 UTC m=+141.205065704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.182451 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.182908 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.682892875 +0000 UTC m=+141.306271416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.284014 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.284690 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.784671496 +0000 UTC m=+141.408050037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.391950 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.392336 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.892322475 +0000 UTC m=+141.515701016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.469872 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whxn9" event={"ID":"637adfa6-5f16-415d-b536-f8c65e5b32c2","Type":"ContainerStarted","Data":"cf8604ce2ba997fda495baa729032f4b73f214f21f30c64fd1d161a81c8c1eee"} Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.471044 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9m65q" event={"ID":"f0966605-6157-4679-a78d-7e744be794a0","Type":"ContainerStarted","Data":"1dd920faebe55c514bfd30c880fcecb1032b41b4e73d67f1d0ae5029d5c9caba"} Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.489526 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mc6nm"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.492515 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.492902 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:28.992887487 +0000 UTC m=+141.616266028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.503566 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.522591 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.594561 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.594896 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.094879975 +0000 UTC m=+141.718258516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.604476 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-642xz"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.609965 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.646455 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hv7d8"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.689977 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.698820 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.699328 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.199312981 +0000 UTC m=+141.822691522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.712771 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pgngb"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.714999 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.729816 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p2wtg"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.733139 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.736323 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.742105 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7j9vw"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.746425 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7895"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.802261 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.802592 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.302577009 +0000 UTC m=+141.925955570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.863177 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.903163 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:28 crc kubenswrapper[4684]: E0123 09:09:28.903618 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.403597715 +0000 UTC m=+142.026976256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.908602 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wd9fz"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.915075 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.927199 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tbqbw"] Jan 23 09:09:28 crc kubenswrapper[4684]: I0123 09:09:28.952422 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t"] Jan 23 09:09:28 crc kubenswrapper[4684]: W0123 09:09:28.955885 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2cab908_172f_4775_881a_226d8c87bcdc.slice/crio-0c0b74414b6afc0dc0968ef064e10385d46fcf08180f1e1c8aef0aa496c1cc3e WatchSource:0}: Error finding container 0c0b74414b6afc0dc0968ef064e10385d46fcf08180f1e1c8aef0aa496c1cc3e: Status 404 returned error can't find the container with id 0c0b74414b6afc0dc0968ef064e10385d46fcf08180f1e1c8aef0aa496c1cc3e Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.005108 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" 
(UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.006026 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.506012187 +0000 UTC m=+142.129390718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.104459 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kx2tw"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.106791 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.107940 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.607919341 +0000 UTC m=+142.231297882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.127577 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sxckj"] Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.149426 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f9b51c_2051_4b01_bf38_09a32c853699.slice/crio-61ee4f130ed437448d6399822f8cf46f22751a83614e761c81beeba207ebaf27 WatchSource:0}: Error finding container 61ee4f130ed437448d6399822f8cf46f22751a83614e761c81beeba207ebaf27: Status 404 returned error can't find the container with id 61ee4f130ed437448d6399822f8cf46f22751a83614e761c81beeba207ebaf27 Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.152891 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4"] Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.154111 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6d441a_7f4f_42b0_8ab4_ddbdcef0a7c5.slice/crio-fdd6c649690476915d0a7383c04957dbd3e407bc66691de174092534751e9eef WatchSource:0}: Error finding container fdd6c649690476915d0a7383c04957dbd3e407bc66691de174092534751e9eef: Status 404 returned error can't find the container with id fdd6c649690476915d0a7383c04957dbd3e407bc66691de174092534751e9eef Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.209456 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.209841 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.709828896 +0000 UTC m=+142.333207437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.283681 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.311827 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.312090 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.812059792 +0000 UTC m=+142.435438333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.312577 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.313034 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.813023483 +0000 UTC m=+142.436402024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.332192 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zp7ft"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.377833 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.396141 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76rxn"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.416453 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.416780 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:29.916761636 +0000 UTC m=+142.540140177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.416942 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8tk99"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.429264 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.435051 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.449024 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tk452"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.449082 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfmsb"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.463098 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp"] Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.470241 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb90a45_05a1_450a_93d7_d20129d62e40.slice/crio-74e4043cd2fd064dd7a32df8a68fe0b4be9433d7d73cc853da28983707302b23 WatchSource:0}: Error finding container 74e4043cd2fd064dd7a32df8a68fe0b4be9433d7d73cc853da28983707302b23: Status 404 returned error can't find the container with id 74e4043cd2fd064dd7a32df8a68fe0b4be9433d7d73cc853da28983707302b23 Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.502742 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wd9fz" event={"ID":"31ebe80c-870d-4be6-844c-504b72eb09d6","Type":"ContainerStarted","Data":"9cb30a261b457dd8175788a4df57479ba5c1c4b8f7ae517d48b1674045855b08"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.517656 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.521536 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.021517642 +0000 UTC m=+142.644896193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.521521 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" event={"ID":"6bea838f-25ef-4690-b5c9-feddd10b04bf","Type":"ContainerStarted","Data":"9e3309bf3f2e95659c6726a03de91cca268222ae2a2dccfc8f0a52da27bed960"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.533381 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" event={"ID":"5ba2e281-6dc9-44ad-90ef-e389fddb83cf","Type":"ContainerStarted","Data":"72c8e0a8196ef1cd49728b71f700ccdde88420c1a4fe80edb34b51af4fc7d075"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.543735 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" event={"ID":"94f9b51c-2051-4b01-bf38-09a32c853699","Type":"ContainerStarted","Data":"61ee4f130ed437448d6399822f8cf46f22751a83614e761c81beeba207ebaf27"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.555962 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" event={"ID":"9b3c5fb5-4205-4162-9d9e-b522ee092236","Type":"ContainerStarted","Data":"b1bff07abb4da27fbcde02006dae45e4f5945b7f8cfa01a12241ea0ed4e45388"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.556032 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" event={"ID":"9b3c5fb5-4205-4162-9d9e-b522ee092236","Type":"ContainerStarted","Data":"939576166ddc838ca670b3a67f9932168da8954bc52490f4a1e8aca962591d6a"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.557952 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" event={"ID":"0457957b-8bad-468f-9602-6d32a17c8f92","Type":"ContainerStarted","Data":"fbde75040cac461a3ca8044cffe14b117dfb247db335ab954388a4c22a5ce5f3"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.561721 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" event={"ID":"e52e2888-8938-4b6f-96a3-e25eaaaf112c","Type":"ContainerStarted","Data":"343f4f8074058b1c3ce81bce225b42f0ec192b904ae7d62fc9fc101e96d965de"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.568514 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.577522 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" event={"ID":"f2cab908-172f-4775-881a-226d8c87bcdc","Type":"ContainerStarted","Data":"0c0b74414b6afc0dc0968ef064e10385d46fcf08180f1e1c8aef0aa496c1cc3e"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.580032 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" event={"ID":"31daf061-abd6-415c-9cd6-2e59cb07d605","Type":"ContainerStarted","Data":"76134924d9c7fccb3e55c68a41b898c35cc17246abb5a225f7173f74c6a2397a"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.593841 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" event={"ID":"3e8dddad-fbbb-4169-9fd1-c908bc5e3660","Type":"ContainerStarted","Data":"8700a71d50cec1c47c155fef4e2d9b6139a53799adbd514adc9cff1fcd8ab8be"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.594122 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" event={"ID":"3e8dddad-fbbb-4169-9fd1-c908bc5e3660","Type":"ContainerStarted","Data":"ae1af42a51f2f9aec2bf7be18eb895ab84cb2b88b72af73af8d7ebfdd2d44ae4"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.595330 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.600041 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" event={"ID":"1d318028-1d65-442a-9e50-ccf71fb54b04","Type":"ContainerStarted","Data":"9c282a5a0c407368897dde9e26dee6edbca5657b8f76132c53741c097e53c7ec"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.605410 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.605987 4684 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wnhgg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.606493 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" podUID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.623269 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.623977 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.123954004 +0000 UTC m=+142.747332555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.624563 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" podStartSLOduration=121.624533192 podStartE2EDuration="2m1.624533192s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:29.622940421 +0000 UTC m=+142.246318972" watchObservedRunningTime="2026-01-23 09:09:29.624533192 +0000 UTC m=+142.247911733" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.631281 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" event={"ID":"e19380fe-fa6c-4c7e-a706-aea1c30a6013","Type":"ContainerStarted","Data":"34fab9127f39b73bd8eb19a465f0607c68df717487c789dd217632216148a78a"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.639368 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" event={"ID":"e6245e77-409a-4116-8c6e-78b21d87529f","Type":"ContainerStarted","Data":"3a9056b1d8a818238b11d051abab67deb539f0f671c24889ffe4a9ef28108d82"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.642841 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.643951 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" event={"ID":"fecf2330-df0b-41ad-99fd-7a58537bfbc6","Type":"ContainerStarted","Data":"0ede1b6200ca530dee5af1718dfcf7fafc998ffa656a8ebd0cecca5b2a68988c"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.647524 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" event={"ID":"e9844493-3620-4f52-bfae-61a79062d001","Type":"ContainerStarted","Data":"493c1503155805cad5fd90235f3a892136409302a9e0aee15692412e3fc2ceb2"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.649637 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-642xz" event={"ID":"2493a510-4c7f-4d74-b1e2-1bfde5d9513b","Type":"ContainerStarted","Data":"d60f43842f19700cc0f6a73b75d08823d4177b99bad4849a6d94bf85db25f02d"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.649691 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-642xz" event={"ID":"2493a510-4c7f-4d74-b1e2-1bfde5d9513b","Type":"ContainerStarted","Data":"97cbe08293be3f28a5d48214a01f2f0c4426dd93a40ed73251597f06c5fc8186"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.649910 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-642xz" Jan 23 09:09:29 crc 
kubenswrapper[4684]: I0123 09:09:29.651304 4684 patch_prober.go:28] interesting pod/console-operator-58897d9998-642xz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.651362 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-642xz" podUID="2493a510-4c7f-4d74-b1e2-1bfde5d9513b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.657587 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mc6nm" event={"ID":"8fa74b73-0b76-426c-a769-39477ab913f6","Type":"ContainerStarted","Data":"ea015939aefd66860eddf0b0326e052d8bf0bc629873cec14014169e24510457"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.657635 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mc6nm" event={"ID":"8fa74b73-0b76-426c-a769-39477ab913f6","Type":"ContainerStarted","Data":"ebff8dd0588f94371e89132b6472ad500d81a75346949d7f49d53ae466ecec9a"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.658288 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.659413 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.659454 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.668142 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whxn9" event={"ID":"637adfa6-5f16-415d-b536-f8c65e5b32c2","Type":"ContainerStarted","Data":"678dabafb2c39f26d62f94e5daa0ed802ccdc9f108295fa5f95813f8ce9d2644"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.670024 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-642xz" podStartSLOduration=122.670010284 podStartE2EDuration="2m2.670010284s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:29.667214474 +0000 UTC m=+142.290593015" watchObservedRunningTime="2026-01-23 09:09:29.670010284 +0000 UTC m=+142.293388825" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.680199 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" event={"ID":"c846db13-b93b-4e07-9e7b-e22106203982","Type":"ContainerStarted","Data":"e6fb2423efcaf120919a2ec511db67b899b41f2db615f1160512f485e94158c5"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 
09:09:29.683955 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" event={"ID":"a0343333-605f-4fb8-adb7-8423a1d36552","Type":"ContainerStarted","Data":"b4cd3ce0933a0a3fbe411eeed7dea13dbd4c1cc59bb0dddef27f220f934ee69d"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.688823 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mc6nm" podStartSLOduration=122.688806218 podStartE2EDuration="2m2.688806218s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:29.687978941 +0000 UTC m=+142.311357492" watchObservedRunningTime="2026-01-23 09:09:29.688806218 +0000 UTC m=+142.312184759" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.690931 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" event={"ID":"513ccd39-0870-4964-85a2-0e9eb9d14a85","Type":"ContainerStarted","Data":"d6e732afeeaf384b46a5419ed102bc340a575932802a41f61141d25044d02c90"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.693069 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" event={"ID":"1b327e86-ed37-44e8-b30d-ef50195f0972","Type":"ContainerStarted","Data":"9502177a599a404c253319afe145d4e5ae03b6ded41f14c6afbbaeed85264f0f"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.694105 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" event={"ID":"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5","Type":"ContainerStarted","Data":"fdd6c649690476915d0a7383c04957dbd3e407bc66691de174092534751e9eef"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.695027 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" event={"ID":"b4b2d72e-d91a-4cde-8e13-205f5346b4ba","Type":"ContainerStarted","Data":"42a8c3b15a4205fb3988b76c17b82dfdc0eac6b74633c590d667b05a241f9c1e"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.695864 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" event={"ID":"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b","Type":"ContainerStarted","Data":"5482c5e6bb511be883ab930c3b6001fb2a8d4872f40c355b152ab9c25da49be1"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.697225 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9m65q" event={"ID":"f0966605-6157-4679-a78d-7e744be794a0","Type":"ContainerStarted","Data":"af1befff71a186eb13d3e3299ca237ccbadcb3a6235c32e698ce7931fd8aaaf3"} Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.705015 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.714612 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-whxn9" podStartSLOduration=122.714592776 podStartE2EDuration="2m2.714592776s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:29.712569101 +0000 UTC m=+142.335947642" watchObservedRunningTime="2026-01-23 09:09:29.714592776 +0000 UTC m=+142.337971317" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.716239 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bxczb"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.725426 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.726068 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.226053465 +0000 UTC m=+142.849432006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.734422 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fxzlb" podStartSLOduration=122.734399753 podStartE2EDuration="2m2.734399753s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:29.732034327 +0000 UTC m=+142.355412868" watchObservedRunningTime="2026-01-23 09:09:29.734399753 +0000 UTC m=+142.357778294" Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.742496 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2"] Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.753095 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9m65q" podStartSLOduration=6.753077793 podStartE2EDuration="6.753077793s" podCreationTimestamp="2026-01-23 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:29.752611048 +0000 UTC m=+142.375989599" watchObservedRunningTime="2026-01-23 09:09:29.753077793 +0000 UTC m=+142.376456334" Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.790635 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f6483b_3d4f_482d_8802_fb7ba6736b69.slice/crio-bf6b8388444210cf0dc8200dcc29f7545e8cc27497f4757c6f4be579c59cfa02 WatchSource:0}: Error finding container bf6b8388444210cf0dc8200dcc29f7545e8cc27497f4757c6f4be579c59cfa02: Status 404 returned error can't find the container with id bf6b8388444210cf0dc8200dcc29f7545e8cc27497f4757c6f4be579c59cfa02 Jan 23 
09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.799724 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97bfbd24_43dd_4c7c_abc0_cc5c502d938a.slice/crio-34c49c2aba4064ee028038fb8422cc620c6472dd98cea9363b2951a9e3da0b42 WatchSource:0}: Error finding container 34c49c2aba4064ee028038fb8422cc620c6472dd98cea9363b2951a9e3da0b42: Status 404 returned error can't find the container with id 34c49c2aba4064ee028038fb8422cc620c6472dd98cea9363b2951a9e3da0b42 Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.809412 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bcafabc_bd27_41f8_bcec_0ea45d079a79.slice/crio-55cbd38a4965c855bcbf40231fc11e0e0080224e44a8eeac1aeac7eee43d5fd1 WatchSource:0}: Error finding container 55cbd38a4965c855bcbf40231fc11e0e0080224e44a8eeac1aeac7eee43d5fd1: Status 404 returned error can't find the container with id 55cbd38a4965c855bcbf40231fc11e0e0080224e44a8eeac1aeac7eee43d5fd1 Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.813323 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d3e8240_e3e7_42d7_a0fa_6379a76c546e.slice/crio-8c68be423129790aead549eda638973712efaa7868e457584bdff95a4981e9c0 WatchSource:0}: Error finding container 8c68be423129790aead549eda638973712efaa7868e457584bdff95a4981e9c0: Status 404 returned error can't find the container with id 8c68be423129790aead549eda638973712efaa7868e457584bdff95a4981e9c0 Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.826383 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.839651 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad4033e_405b_4649_a039_5169aa401f18.slice/crio-05f13c5e1063b602f98a3a18a41e78e6aad0aa9ada49396ff5b216f51bc58bbe WatchSource:0}: Error finding container 05f13c5e1063b602f98a3a18a41e78e6aad0aa9ada49396ff5b216f51bc58bbe: Status 404 returned error can't find the container with id 05f13c5e1063b602f98a3a18a41e78e6aad0aa9ada49396ff5b216f51bc58bbe Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.841170 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.341131793 +0000 UTC m=+142.964510334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:29 crc kubenswrapper[4684]: W0123 09:09:29.849103 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode60787da_c4f0_4034_b543_f70e46a6ded4.slice/crio-c0ecca90115759d9b041e8fae3d1ddafbe7fb8c991fc20b4ee158af3907cd768 WatchSource:0}: Error finding container c0ecca90115759d9b041e8fae3d1ddafbe7fb8c991fc20b4ee158af3907cd768: Status 404 returned error can't find the container with id c0ecca90115759d9b041e8fae3d1ddafbe7fb8c991fc20b4ee158af3907cd768 Jan 23 09:09:29 crc kubenswrapper[4684]: I0123 09:09:29.928619 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:29 crc kubenswrapper[4684]: E0123 09:09:29.929003 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.428987896 +0000 UTC m=+143.052366437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.031148 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.031462 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.531418698 +0000 UTC m=+143.154797239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.031733 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.032234 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.532219353 +0000 UTC m=+143.155597894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.134601 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.135074 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.635055008 +0000 UTC m=+143.258433549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.148668 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-whxn9" Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.164285 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 09:09:30 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 23 09:09:30 crc kubenswrapper[4684]: [+]process-running ok Jan 23 09:09:30 crc kubenswrapper[4684]: healthz check failed Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.164334 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.238095 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.238396 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.738384318 +0000 UTC m=+143.361762859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.338772 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.339522 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.839487338 +0000 UTC m=+143.462865899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.443033 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.443442 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:30.943427148 +0000 UTC m=+143.566805689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.546794 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.547091 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.047067458 +0000 UTC m=+143.670445999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.547299 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.547774 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.04775649 +0000 UTC m=+143.671135031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.656365 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.656525 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.156494145 +0000 UTC m=+143.779872686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.656734 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.657087 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.157070793 +0000 UTC m=+143.780449334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.743339 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" event={"ID":"af9efd93-5eee-4e16-a36f-25d29663ff5c","Type":"ContainerStarted","Data":"6059ee6a7e9f9a64e4a9861fffe403b6950eb61c006d8fa96aadd72a2c4b7a58"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.761846 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.762007 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.261969384 +0000 UTC m=+143.885347925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.762294 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.762730 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.262708478 +0000 UTC m=+143.886087019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.816577 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" event={"ID":"e52e2888-8938-4b6f-96a3-e25eaaaf112c","Type":"ContainerStarted","Data":"823ac7dbd483de1e579b8233a1d842d2f4430e283ffb2972e4133d7396a7c0eb"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.854832 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w66j2" podStartSLOduration=123.854806088 podStartE2EDuration="2m3.854806088s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:30.85208692 +0000 UTC m=+143.475465461" watchObservedRunningTime="2026-01-23 09:09:30.854806088 +0000 UTC m=+143.478184629" Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.860382 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" event={"ID":"ebc04459-cb74-4868-8eb4-51a4d8856890","Type":"ContainerStarted","Data":"6ad519180ee6073edb987a3512795f70cfc59837e6cb7d69766722070e755103"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.863391 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.863560 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.363537098 +0000 UTC m=+143.986915639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.863597 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.873571 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.37355165 +0000 UTC m=+143.996930191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.891754 4684 generic.go:334] "Generic (PLEG): container finished" podID="e6245e77-409a-4116-8c6e-78b21d87529f" containerID="92e4c2cabc1326e0f94a391e4b6479369b2537e842b2b0becb9c092a023e8851" exitCode=0 Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.892334 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" event={"ID":"e6245e77-409a-4116-8c6e-78b21d87529f","Type":"ContainerDied","Data":"92e4c2cabc1326e0f94a391e4b6479369b2537e842b2b0becb9c092a023e8851"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.931515 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" event={"ID":"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b","Type":"ContainerStarted","Data":"ea649958e251ddecbff663a9dae20fc25bd415a297c2e2e08555a6b39378af70"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.964862 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:30 crc kubenswrapper[4684]: E0123 09:09:30.966681 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.466625121 +0000 UTC m=+144.090003742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.972865 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" event={"ID":"1d318028-1d65-442a-9e50-ccf71fb54b04","Type":"ContainerStarted","Data":"1cebb26606775c66a2f2e78a7eaf6537abd27cac6130c8c5c8fae01efda20e44"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.974977 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wd9fz" event={"ID":"31ebe80c-870d-4be6-844c-504b72eb09d6","Type":"ContainerStarted","Data":"9c8580cf9d6f1f3b2e3183d9599cd2dc8a20148912e482b2d8f1ed733d44fe11"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.979684 4684 generic.go:334] "Generic (PLEG): container finished" podID="6bea838f-25ef-4690-b5c9-feddd10b04bf" containerID="2c31472b2a061f6eaf5ca5bc314f807c251d5af3f02a47a36436af18ca149817" exitCode=0 Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.979782 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" event={"ID":"6bea838f-25ef-4690-b5c9-feddd10b04bf","Type":"ContainerDied","Data":"2c31472b2a061f6eaf5ca5bc314f807c251d5af3f02a47a36436af18ca149817"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.996509 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" event={"ID":"e60787da-c4f0-4034-b543-f70e46a6ded4","Type":"ContainerStarted","Data":"c0ecca90115759d9b041e8fae3d1ddafbe7fb8c991fc20b4ee158af3907cd768"} Jan 23 09:09:30 crc kubenswrapper[4684]: I0123 09:09:30.997633 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" event={"ID":"c846db13-b93b-4e07-9e7b-e22106203982","Type":"ContainerStarted","Data":"f3d4749d1cdf3b2ee51e79c20ce920b5dfc161f4e6da5794c6c4502f5b162b07"} Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.000538 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.026996 4684 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hv7d8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.027064 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.038286 
4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbqbw" podStartSLOduration=124.038270653 podStartE2EDuration="2m4.038270653s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.036499326 +0000 UTC m=+143.659877867" watchObservedRunningTime="2026-01-23 09:09:31.038270653 +0000 UTC m=+143.661649194" Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.057752 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" event={"ID":"991fb184-b936-412b-ae42-fe3a085c4bf9","Type":"ContainerStarted","Data":"e253183f625013813e7339c7732c64c540410ff66bfd83b7acbebd45aa2ce026"} Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.076582 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" event={"ID":"703df6b3-b903-4818-b0c8-8681de1c6065","Type":"ContainerStarted","Data":"c38f54fca325f71ef8fa291d6bd120a9bf3abc611b72ed610b80b584badf9fc0"} Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.078348 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.078838 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.578820546 +0000 UTC m=+144.202199077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.089087 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76rxn" event={"ID":"2bcafabc-bd27-41f8-bcec-0ea45d079a79","Type":"ContainerStarted","Data":"55cbd38a4965c855bcbf40231fc11e0e0080224e44a8eeac1aeac7eee43d5fd1"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.096336 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" event={"ID":"52f6483b-3d4f-482d-8802-fb7ba6736b69","Type":"ContainerStarted","Data":"bf6b8388444210cf0dc8200dcc29f7545e8cc27497f4757c6f4be579c59cfa02"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.107659 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" event={"ID":"35a3e02f-21f3-4762-8260-c52003d4499c","Type":"ContainerStarted","Data":"98a4c7c888694afc391bd0dbe1aba651e13d7b392cfc90f53d63e540e7e1af26"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.111288 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" event={"ID":"b4b2d72e-d91a-4cde-8e13-205f5346b4ba","Type":"ContainerStarted","Data":"a309865425d50fd6ba719e63187751b4d3a6abf037336b8b06cbde5343dd1c0f"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.113424 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" event={"ID":"7d3e8240-e3e7-42d7-a0fa-6379a76c546e","Type":"ContainerStarted","Data":"8c68be423129790aead549eda638973712efaa7868e457584bdff95a4981e9c0"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.133840 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" podStartSLOduration=124.133821664 podStartE2EDuration="2m4.133821664s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.133033589 +0000 UTC m=+143.756412140" watchObservedRunningTime="2026-01-23 09:09:31.133821664 +0000 UTC m=+143.757200215"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.149274 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" event={"ID":"5ba2e281-6dc9-44ad-90ef-e389fddb83cf","Type":"ContainerStarted","Data":"ce33f2fdc2f45fa4b45fba6680dd934fa414da6b8947de479749d19c2cb61fd9"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.152356 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:31 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:31 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:31 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.152421 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.163662 4684 generic.go:334] "Generic (PLEG): container finished" podID="e19380fe-fa6c-4c7e-a706-aea1c30a6013" containerID="324929a117e9f0c9e14ed9e46f6e18f442a26a14152b34019990e59c522bd553" exitCode=0
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.164592 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" event={"ID":"e19380fe-fa6c-4c7e-a706-aea1c30a6013","Type":"ContainerDied","Data":"324929a117e9f0c9e14ed9e46f6e18f442a26a14152b34019990e59c522bd553"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.180925 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.181466 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.681447354 +0000 UTC m=+144.304825895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.189788 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wd9fz" podStartSLOduration=124.189768502 podStartE2EDuration="2m4.189768502s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.187316343 +0000 UTC m=+143.810694884" watchObservedRunningTime="2026-01-23 09:09:31.189768502 +0000 UTC m=+143.813147043"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.220750 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9crd7" podStartSLOduration=124.220729747 podStartE2EDuration="2m4.220729747s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.219041143 +0000 UTC m=+143.842419684" watchObservedRunningTime="2026-01-23 09:09:31.220729747 +0000 UTC m=+143.844108288"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.240992 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" event={"ID":"513ccd39-0870-4964-85a2-0e9eb9d14a85","Type":"ContainerStarted","Data":"69fae2986f7b62f8976db48a682d2480f1762540e77dedb511982fe427237c74"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.241986 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.247538 4684 csr.go:261] certificate signing request csr-jgrd4 is approved, waiting to be issued
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.250629 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" event={"ID":"f2cab908-172f-4775-881a-226d8c87bcdc","Type":"ContainerStarted","Data":"4d82ba4c96010fd8adbe555756e745b5f53c3b66d6fbfd683f2c503919497eb1"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.269052 4684 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l7895 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.269126 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.270055 4684 csr.go:257] certificate signing request csr-jgrd4 is issued
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.289217 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.289583 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.789569529 +0000 UTC m=+144.412948070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.295904 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tk452" event={"ID":"e1331c42-e8e8-4e17-bfa3-0961208c57fd","Type":"ContainerStarted","Data":"e699178b160c93d4c7b028257d039f9bccd0ffe3a91ccbbafe31a45931d23ea0"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.307266 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" event={"ID":"0457957b-8bad-468f-9602-6d32a17c8f92","Type":"ContainerStarted","Data":"3a292cceac22cf4c43a73490134d026f27983494bc65916fe46dc14f34046bf2"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.325325 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" podStartSLOduration=124.325302637 podStartE2EDuration="2m4.325302637s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.315658577 +0000 UTC m=+143.939037138" watchObservedRunningTime="2026-01-23 09:09:31.325302637 +0000 UTC m=+143.948681178"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.328541 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" event={"ID":"a9289743-2808-4efc-a6f9-bd8b5e33d553","Type":"ContainerStarted","Data":"57391f86f6250c54ec39c1015770ba421a7298adff990c24cec88709f797e476"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.381632 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-p2wtg" podStartSLOduration=124.37578799 podStartE2EDuration="2m4.37578799s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.374184688 +0000 UTC m=+143.997563239" watchObservedRunningTime="2026-01-23 09:09:31.37578799 +0000 UTC m=+143.999166541"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.390467 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.392166 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.892148225 +0000 UTC m=+144.515526766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.394315 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" event={"ID":"9eb90a45-05a1-450a-93d7-d20129d62e40","Type":"ContainerStarted","Data":"74e4043cd2fd064dd7a32df8a68fe0b4be9433d7d73cc853da28983707302b23"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.418399 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bxczb" event={"ID":"6ad4033e-405b-4649-a039-5169aa401f18","Type":"ContainerStarted","Data":"05f13c5e1063b602f98a3a18a41e78e6aad0aa9ada49396ff5b216f51bc58bbe"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.456067 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" event={"ID":"97bfbd24-43dd-4c7c-abc0-cc5c502d938a","Type":"ContainerStarted","Data":"34c49c2aba4064ee028038fb8422cc620c6472dd98cea9363b2951a9e3da0b42"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.491594 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" event={"ID":"f92af7c0-b6ef-4fe1-b057-b2424aa96458","Type":"ContainerStarted","Data":"46984ec05dab61de4366c6636dd798f657ab72422f639ba75b726b7574159a53"}
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.493776 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.493846 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.494959 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.497261 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:31.997247553 +0000 UTC m=+144.620626094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.556345 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.591431 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5k2t" podStartSLOduration=124.591410369 podStartE2EDuration="2m4.591410369s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:31.423227544 +0000 UTC m=+144.046606085" watchObservedRunningTime="2026-01-23 09:09:31.591410369 +0000 UTC m=+144.214788920"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.598072 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.599435 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.099412566 +0000 UTC m=+144.722791107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.699183 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.699902 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.199889715 +0000 UTC m=+144.823268256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.802585 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.803085 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.30303589 +0000 UTC m=+144.926414441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.831543 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5"
Jan 23 09:09:31 crc kubenswrapper[4684]: I0123 09:09:31.908363 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:31 crc kubenswrapper[4684]: E0123 09:09:31.908949 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.408929393 +0000 UTC m=+145.032308034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.009084 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.009581 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.509560646 +0000 UTC m=+145.132939187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.110952 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.111664 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.611647907 +0000 UTC m=+145.235026448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.150050 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:32 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:32 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:32 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.150109 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.211960 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.212457 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.712435796 +0000 UTC m=+145.335814337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.240137 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-642xz"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.280941 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 09:04:31 +0000 UTC, rotation deadline is 2026-10-11 11:45:13.131151799 +0000 UTC
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.280989 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6266h35m40.85016653s for next certificate rotation
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.313860 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.314236 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.814220777 +0000 UTC m=+145.437599328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.414775 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.414874 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.91484083 +0000 UTC m=+145.538219381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.415107 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.415439 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:32.915431259 +0000 UTC m=+145.538809800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.498382 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" event={"ID":"31daf061-abd6-415c-9cd6-2e59cb07d605","Type":"ContainerStarted","Data":"5f04e45525ee610c3a71179ec04cbb1984ee2d8ea5457613e0eef7f2eedda688"}
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.500883 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" event={"ID":"35a3e02f-21f3-4762-8260-c52003d4499c","Type":"ContainerStarted","Data":"497d3397240f7f0b34909e8caf6992d566fbf2e31c5d16bf8236951ad3bc75c7"}
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.502044 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.504040 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" event={"ID":"af9efd93-5eee-4e16-a36f-25d29663ff5c","Type":"ContainerStarted","Data":"ce5949bd5ed8897d7cde533193c7bb4ca7bb94e7957c991dd5d0adc635153051"}
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.504213 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g94qp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.504395 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podUID="35a3e02f-21f3-4762-8260-c52003d4499c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.506679 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" event={"ID":"991fb184-b936-412b-ae42-fe3a085c4bf9","Type":"ContainerStarted","Data":"127b228220c95e70f41558791f17956529305e0e794459ca359601a9801e6706"}
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.507320 4684 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l7895 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.507430 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.507890 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.507937 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.508327 4684 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hv7d8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.508472 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.515904 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.516011 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.015995161 +0000 UTC m=+145.639373702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.517662 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.530210 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.030183487 +0000 UTC m=+145.653562028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.558382 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podStartSLOduration=124.558355842 podStartE2EDuration="2m4.558355842s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:32.552447532 +0000 UTC m=+145.175826083" watchObservedRunningTime="2026-01-23 09:09:32.558355842 +0000 UTC m=+145.181734383"
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.619990 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.620231 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.12020184 +0000 UTC m=+145.743580391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.620312 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.620719 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.120687385 +0000 UTC m=+145.744065926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.726095 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.726309 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.226278189 +0000 UTC m=+145.849656750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.726438 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.726770 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.226755414 +0000 UTC m=+145.850134025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.827718 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.828173 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.328152722 +0000 UTC m=+145.951531263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:32 crc kubenswrapper[4684]: I0123 09:09:32.929194 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:32 crc kubenswrapper[4684]: E0123 09:09:32.929758 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.429739267 +0000 UTC m=+146.053117878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.031142 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.031460 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.531443905 +0000 UTC m=+146.154822436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.132924 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.133262 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.633246666 +0000 UTC m=+146.256625207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.151280 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:33 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:33 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:33 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.151389 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.240430 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.240912 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.740893215 +0000 UTC m=+146.364271756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.342497 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.342895 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.842880423 +0000 UTC m=+146.466258964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.443984 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.444359 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:33.944343583 +0000 UTC m=+146.567722124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.520272 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" event={"ID":"af6d441a-7f4f-42b0-8ab4-ddbdcef0a7c5","Type":"ContainerStarted","Data":"28c4b8e25fc94f75832be41947674bc45caaf537421971a9224d38b33211d70a"}
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.521847 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" event={"ID":"94f9b51c-2051-4b01-bf38-09a32c853699","Type":"ContainerStarted","Data":"72213f3d2203bb687eabe07184555792bfbe415f40413c1180993262a146245c"}
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.524285 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76rxn" event={"ID":"2bcafabc-bd27-41f8-bcec-0ea45d079a79","Type":"ContainerStarted","Data":"67b1c06a5859134b3ea1d7b17cb75b396674ebc28605e4c582a54b42e9908eb8"}
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.526935 4684 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l7895 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.526990 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.527022 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g94qp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.527079 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podUID="35a3e02f-21f3-4762-8260-c52003d4499c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.545432 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.546641 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.04662601 +0000 UTC m=+146.670004541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.646884 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.647228 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.147212342 +0000 UTC m=+146.770590883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.748506 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.748915 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.2489019 +0000 UTC m=+146.872280441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.849348 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.849669 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.349652918 +0000 UTC m=+146.973031459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:33 crc kubenswrapper[4684]: I0123 09:09:33.951404 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:33 crc kubenswrapper[4684]: E0123 09:09:33.951549 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.451533672 +0000 UTC m=+147.074912213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.052053 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.052426 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.552408064 +0000 UTC m=+147.175786615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.149146 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:34 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:34 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:34 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.149209 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.154032 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.154430 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.654411592 +0000 UTC m=+147.277790133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.255479 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.255910 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.755894693 +0000 UTC m=+147.379273234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.356722 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.357068 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.857053034 +0000 UTC m=+147.480431575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.457611 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.457784 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.95775637 +0000 UTC m=+147.581134911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.457976 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.458369 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:34.958359199 +0000 UTC m=+147.581737780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.525419 4684 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hv7d8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.525501 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.530676 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g94qp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.530751 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podUID="35a3e02f-21f3-4762-8260-c52003d4499c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.559511 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.559667 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.059633794 +0000 UTC m=+147.683012345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.559756 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.560163 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.06015112 +0000 UTC m=+147.683529661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.661369 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.661594 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.161567019 +0000 UTC m=+147.784945560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.661772 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.662062 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.162050525 +0000 UTC m=+147.785429066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.763203 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.763444 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.263413532 +0000 UTC m=+147.886792073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.763742 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.764089 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.264081264 +0000 UTC m=+147.887459805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.865236 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.865332 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.365310747 +0000 UTC m=+147.988689288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.865582 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.865921 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.365911066 +0000 UTC m=+147.989289607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.967052 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.967251 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.467225782 +0000 UTC m=+148.090604323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:34 crc kubenswrapper[4684]: I0123 09:09:34.967325 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:34 crc kubenswrapper[4684]: E0123 09:09:34.967630 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.467617604 +0000 UTC m=+148.090996145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.069043 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.069224 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.569196129 +0000 UTC m=+148.192574670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.069366 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.069643 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.569629993 +0000 UTC m=+148.193008534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.147872 4684 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l7895 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.147936 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.154025 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 09:09:35 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 23 09:09:35 crc kubenswrapper[4684]: [+]process-running ok Jan 23 09:09:35 crc kubenswrapper[4684]: healthz check failed Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.154078 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.171242 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.171444 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.671416114 +0000 UTC m=+148.294794655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.171898 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.172196 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.672187548 +0000 UTC m=+148.295566089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.273298 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.273578 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.773534085 +0000 UTC m=+148.396912666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.273821 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.274313 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.77429422 +0000 UTC m=+148.397672801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.375445 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.375641 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.875610026 +0000 UTC m=+148.498988567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.375789 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.376068 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.87605989 +0000 UTC m=+148.499438431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.431733 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.432058 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.431830 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.432288 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.476549 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: 
E0123 09:09:35.477044 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:35.977024915 +0000 UTC m=+148.600403456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.536774 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" event={"ID":"1b327e86-ed37-44e8-b30d-ef50195f0972","Type":"ContainerStarted","Data":"532abed3a4ecfb42646c7a2dc0081fe222d94f4273fc45ac79ced4e0b5febbc8"} Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.578488 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.578802 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.078787325 +0000 UTC m=+148.702165866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.679792 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.679945 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.179919585 +0000 UTC m=+148.803298126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.680058 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.680108 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.680169 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.680215 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.680670 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.180651728 +0000 UTC m=+148.804030259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.681004 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.686649 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.686777 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.687234 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.729330 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.775870 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.782040 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.782492 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.28247481 +0000 UTC m=+148.905853351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.883167 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.883454 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.383443455 +0000 UTC m=+149.006821996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.985247 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.985730 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.485689731 +0000 UTC m=+149.109068262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:35 crc kubenswrapper[4684]: I0123 09:09:35.985939 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:35 crc kubenswrapper[4684]: E0123 09:09:35.986269 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.486255359 +0000 UTC m=+149.109633900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.095336 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.095855 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.595839621 +0000 UTC m=+149.219218162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.149847 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 09:09:36 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 23 09:09:36 crc kubenswrapper[4684]: [+]process-running ok Jan 23 09:09:36 crc kubenswrapper[4684]: healthz check failed Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.149915 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.187829 4684 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hv7d8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.187917 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.197616 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.197925 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.697912681 +0000 UTC m=+149.321291222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.198515 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.298734 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.298988 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.798931237 +0000 UTC m=+149.422309778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.299095 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.299490 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.799478524 +0000 UTC m=+149.422857055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.389158 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.400666 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.401032 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.901006807 +0000 UTC m=+149.524385338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.401319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.401671 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:36.901656938 +0000 UTC m=+149.525035479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.506102 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.506448 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.006435065 +0000 UTC m=+149.629813606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.573205 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9d1abec8192efb62690477e6733d1b04de294aee56ca3948d2e24b05749796eb"} Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.586356 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" event={"ID":"9071fc4b-8d0f-41fe-832b-c3c9f5f0351b","Type":"ContainerStarted","Data":"c09286ca0e8704035ecf9ed075a884d0631ce75da660bc9637411a195d09d11b"} Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.600259 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" event={"ID":"97bfbd24-43dd-4c7c-abc0-cc5c502d938a","Type":"ContainerStarted","Data":"c5f65acfef45d3101828c5d8af5c1c4728eaf7ef5d5aa14e3ea1fe65be5d3668"} Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.607495 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.607861 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.107846914 +0000 UTC m=+149.731225455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.612064 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" event={"ID":"fecf2330-df0b-41ad-99fd-7a58537bfbc6","Type":"ContainerStarted","Data":"6a65ca8c96a1fec678d0323f4828cdb6fe10d5fd1d47e4b43f1859c48bc5993f"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.628552 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" event={"ID":"f92af7c0-b6ef-4fe1-b057-b2424aa96458","Type":"ContainerStarted","Data":"7f826bc2b313f9ae71ecbc2a4f871255db93dfc962ce3bc786b7259b3e1115c0"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.634735 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" event={"ID":"9eb90a45-05a1-450a-93d7-d20129d62e40","Type":"ContainerStarted","Data":"abefceaa56d1924aac402832e82d3d8b919ceb35b76f1e7785d2c4a18a988223"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.648892 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" event={"ID":"e9844493-3620-4f52-bfae-61a79062d001","Type":"ContainerStarted","Data":"931dff0077156c8e0def96c50b0fa20e00dafc6077ff622bd36e865cec7e426e"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.650874 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"84ebeb3f02c50feaf5825ed0aad99a67f1706f8f0082608c676b8c2acded4bde"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.663498 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" event={"ID":"6bea838f-25ef-4690-b5c9-feddd10b04bf","Type":"ContainerStarted","Data":"4fef6ff3c75583f402d5b9bf985a1c0535937f37f9c9eb99307c19c8afb55032"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.672745 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" event={"ID":"7d3e8240-e3e7-42d7-a0fa-6379a76c546e","Type":"ContainerStarted","Data":"2892349cfbda780621ff677d6c6b8e64018aa431d2495b06c636d820584190b5"}
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.708832 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.709034 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.209009144 +0000 UTC m=+149.832387685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.709105 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.709385 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.209370006 +0000 UTC m=+149.832748547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.809921 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.810583 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.310540917 +0000 UTC m=+149.933919458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:36 crc kubenswrapper[4684]: I0123 09:09:36.911661 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:36 crc kubenswrapper[4684]: E0123 09:09:36.912061 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.412043019 +0000 UTC m=+150.035421620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.011890 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.011933 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.012224 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.012439 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.512402374 +0000 UTC m=+150.135780925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.012525 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.012827 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.512819617 +0000 UTC m=+150.136198158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.019882 4684 patch_prober.go:28] interesting pod/console-f9d7485db-wd9fz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.019930 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wd9fz" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.113546 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:37 crc kubenswrapper[4684]: W0123 09:09:37.113751 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ffc7714725432e556ab572030b8a3ad501e08c93ccbe27016789edd020674088 WatchSource:0}: Error finding container ffc7714725432e556ab572030b8a3ad501e08c93ccbe27016789edd020674088: Status 404 returned error can't find the container with id ffc7714725432e556ab572030b8a3ad501e08c93ccbe27016789edd020674088
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.113853 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.613833263 +0000 UTC m=+150.237211814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.113972 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.114454 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.614443863 +0000 UTC m=+150.237822404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.147689 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-whxn9"
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.159007 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:37 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:37 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:37 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.159085 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.214869 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.215812 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.7157857 +0000 UTC m=+150.339164241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.316124 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.316435 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.816418244 +0000 UTC m=+150.439796785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.417222 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.417374 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.917349957 +0000 UTC m=+150.540728498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.417418 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.417764 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:37.91775405 +0000 UTC m=+150.541132581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.519113 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.519751 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.019730867 +0000 UTC m=+150.643109408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.621331 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.621708 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.121681593 +0000 UTC m=+150.745060134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.686020 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" event={"ID":"a9289743-2808-4efc-a6f9-bd8b5e33d553","Type":"ContainerStarted","Data":"02f499d3e1e2b286e39db75ca1d7d17bff27a85600c008395e118f11a9a7e4b6"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.689295 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" event={"ID":"ebc04459-cb74-4868-8eb4-51a4d8856890","Type":"ContainerStarted","Data":"7f92ddf55b3841d893b4e6c98b60379d99cbe0362294d68321baa4cbde4f1ee3"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.691071 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" event={"ID":"b4b2d72e-d91a-4cde-8e13-205f5346b4ba","Type":"ContainerStarted","Data":"c2a51b9836324a1255e033e80a8ff5f3d3897349126a6bfe229c1b17d305a864"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.713907 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bxczb" event={"ID":"6ad4033e-405b-4649-a039-5169aa401f18","Type":"ContainerStarted","Data":"0f6d5e6034a896c78915cb563e9a2db83c8e48e388b01031bb4b0ef921a03b55"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.721973 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.722282 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.222263756 +0000 UTC m=+150.845642307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.726263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" event={"ID":"e19380fe-fa6c-4c7e-a706-aea1c30a6013","Type":"ContainerStarted","Data":"dddcdd98a37fa11f53bbe485e53975445b74a8bf27b24a717899aaaad85da499"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.727654 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" event={"ID":"9b3c5fb5-4205-4162-9d9e-b522ee092236","Type":"ContainerStarted","Data":"e05d9ba3a4689c4eee3b46e7c629ef3a03f12234e89bb5665070110c196fa71e"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.729950 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" event={"ID":"991fb184-b936-412b-ae42-fe3a085c4bf9","Type":"ContainerStarted","Data":"68ba29aee0db2280aba5ddb5a6b8933c786ec6076e6870d9278371558118763b"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.733630 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" event={"ID":"e6245e77-409a-4116-8c6e-78b21d87529f","Type":"ContainerStarted","Data":"c182fe93d1e339fa5c4a1e73a501dfad3635be9ebfa47693a0a312b7acc8c856"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.745837 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" event={"ID":"703df6b3-b903-4818-b0c8-8681de1c6065","Type":"ContainerStarted","Data":"bf0e2db7f62363906898199e85bc114cf704a5ad24bf8db0ca11597b9b1db919"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.753576 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" event={"ID":"52f6483b-3d4f-482d-8802-fb7ba6736b69","Type":"ContainerStarted","Data":"f84ca9bd5292bd2d7c62029a39fcb5d51c25ec214c589a7592d61d03cf4bbe28"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.755998 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" event={"ID":"e60787da-c4f0-4034-b543-f70e46a6ded4","Type":"ContainerStarted","Data":"87666b2b0a0702ea8947215420114682dc1657e618df4ae3a021075e2545bf72"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.759155 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tk452" event={"ID":"e1331c42-e8e8-4e17-bfa3-0961208c57fd","Type":"ContainerStarted","Data":"8f6e8edee2d110d1d3bbe9ee9badd8c6f8d5ad424c24a62bf5d522adac9a7165"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.760671 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ffc7714725432e556ab572030b8a3ad501e08c93ccbe27016789edd020674088"}
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.761739 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr"
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.773856 4684 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qj7jr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.773944 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" podUID="31daf061-abd6-415c-9cd6-2e59cb07d605" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.828154 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.833042 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.333019645 +0000 UTC m=+150.956398186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:37 crc kubenswrapper[4684]: I0123 09:09:37.932211 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:37 crc kubenswrapper[4684]: E0123 09:09:37.932901 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.432870394 +0000 UTC m=+151.056248935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.036009 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.036365 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.536350549 +0000 UTC m=+151.159729090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.136602 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.137015 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.636997784 +0000 UTC m=+151.260376325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.150255 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:38 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:38 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:38 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.150336 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.218812 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g94qp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.218861 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podUID="35a3e02f-21f3-4762-8260-c52003d4499c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.218885 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g94qp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.218943 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podUID="35a3e02f-21f3-4762-8260-c52003d4499c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.238503 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.238822 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.738809395 +0000 UTC m=+151.362187936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.340039 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.340692 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.840669629 +0000 UTC m=+151.464048170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.441520 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.441929 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:38.941913902 +0000 UTC m=+151.565292443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.542387 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.542795 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.042779044 +0000 UTC m=+151.666157585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.634492 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sxckj" podStartSLOduration=130.63447182 podStartE2EDuration="2m10.63447182s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:38.6245124 +0000 UTC m=+151.247890951" watchObservedRunningTime="2026-01-23 09:09:38.63447182 +0000 UTC m=+151.257850381" Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.644139 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.644583 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.144564715 +0000 UTC m=+151.767943256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.659495 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-76rxn" podStartSLOduration=15.659480004 podStartE2EDuration="15.659480004s" podCreationTimestamp="2026-01-23 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:38.657266723 +0000 UTC m=+151.280645274" watchObservedRunningTime="2026-01-23 09:09:38.659480004 +0000 UTC m=+151.282858545" Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.746328 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.746458 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.246429808 +0000 UTC m=+151.869808339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.746808 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.747170 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.247160342 +0000 UTC m=+151.870538883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.768273 4684 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qj7jr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.768352 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" podUID="31daf061-abd6-415c-9cd6-2e59cb07d605" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.811843 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr" podStartSLOduration=130.81182649 podStartE2EDuration="2m10.81182649s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:38.684232129 +0000 UTC m=+151.307610690" watchObservedRunningTime="2026-01-23 09:09:38.81182649 +0000 UTC m=+151.435205031" Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.847445 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.849795 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.349771279 +0000 UTC m=+151.973149820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:38 crc kubenswrapper[4684]: I0123 09:09:38.949067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:38 crc kubenswrapper[4684]: E0123 09:09:38.949445 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.449431652 +0000 UTC m=+152.072810193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.050199 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.050362 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.550339024 +0000 UTC m=+152.173717565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.050561 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.050890 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.550876222 +0000 UTC m=+152.174254763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.149760 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:39 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:39 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:39 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.149839 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.151210 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.151344 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.65132687 +0000 UTC m=+152.274705411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.151472 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.151954 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.651944249 +0000 UTC m=+152.275322790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.254331 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.254568 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.754536786 +0000 UTC m=+152.377915337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.254721 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.255103 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.755087844 +0000 UTC m=+152.378466435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.355881 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.356413 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.8563986 +0000 UTC m=+152.479777141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.457870 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.458459 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:39.958442369 +0000 UTC m=+152.581820910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.559153 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.559739 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.059722573 +0000 UTC m=+152.683101114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.661408 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.662215 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.162197226 +0000 UTC m=+152.785575767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.763232 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.763489 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.263463141 +0000 UTC m=+152.886841682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.763667 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.764079 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.26406475 +0000 UTC m=+152.887443291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.794205 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sm5m4" podStartSLOduration=131.794187168 podStartE2EDuration="2m11.794187168s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:38.811971394 +0000 UTC m=+151.435349945" watchObservedRunningTime="2026-01-23 09:09:39.794187168 +0000 UTC m=+152.417565709"
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.865311 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.865729 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.365683955 +0000 UTC m=+152.989062506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:39 crc kubenswrapper[4684]: I0123 09:09:39.972919 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:39 crc kubenswrapper[4684]: E0123 09:09:39.973894 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.473872772 +0000 UTC m=+153.097251313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.075167 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.075308 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.575287201 +0000 UTC m=+153.198665742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.075408 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.075727 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.575714645 +0000 UTC m=+153.199093186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.150098 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:40 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:40 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:40 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.150584 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.176587 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.176993 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.676964219 +0000 UTC m=+153.300342810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.278267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.279149 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.779136261 +0000 UTC m=+153.402514802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.379613 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.380020 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.879994742 +0000 UTC m=+153.503373283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.481011 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.481301 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:40.981289028 +0000 UTC m=+153.604667569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.581821 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.582090 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.082061736 +0000 UTC m=+153.705440277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.582488 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.582874 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.082857051 +0000 UTC m=+153.706235592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.683938 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.684308 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.184281821 +0000 UTC m=+153.807660372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.783457 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn"
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.785682 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.786214 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.286194976 +0000 UTC m=+153.909573517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.803293 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9np9f" podStartSLOduration=132.803269665 podStartE2EDuration="2m12.803269665s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:39.797797894 +0000 UTC m=+152.421176435" watchObservedRunningTime="2026-01-23 09:09:40.803269665 +0000 UTC m=+153.426648206"
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.804948 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkzz2" podStartSLOduration=132.804880576 podStartE2EDuration="2m12.804880576s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:40.801351633 +0000 UTC m=+153.424730184" watchObservedRunningTime="2026-01-23 09:09:40.804880576 +0000 UTC m=+153.428259127"
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.887612 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.890911 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.390853229 +0000 UTC m=+154.014231820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.953544 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" podStartSLOduration=132.953527993 podStartE2EDuration="2m12.953527993s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:40.846173443 +0000 UTC m=+153.469551994" watchObservedRunningTime="2026-01-23 09:09:40.953527993 +0000 UTC m=+153.576906534"
Jan 23 09:09:40 crc kubenswrapper[4684]: I0123 09:09:40.989181 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:40 crc kubenswrapper[4684]: E0123 09:09:40.989569 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.489555501 +0000 UTC m=+154.112934042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.015406 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4qpn2" podStartSLOduration=133.015391221 podStartE2EDuration="2m13.015391221s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:40.954662139 +0000 UTC m=+153.578040680" watchObservedRunningTime="2026-01-23 09:09:41.015391221 +0000 UTC m=+153.638769762"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.085963 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6" podStartSLOduration=133.085926288 podStartE2EDuration="2m13.085926288s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:41.016717514 +0000 UTC m=+153.640096095" watchObservedRunningTime="2026-01-23 09:09:41.085926288 +0000 UTC m=+153.709304829"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.090509 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.091266 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.591246249 +0000 UTC m=+154.214624790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.172349 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:41 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:41 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:41 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.172414 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.195415 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.195877 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.69585792 +0000 UTC m=+154.319236521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.296664 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.296851 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.796822685 +0000 UTC m=+154.420201226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.296898 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.297408 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.797396983 +0000 UTC m=+154.420775524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.398509 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.398796 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.898773071 +0000 UTC m=+154.522151612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.399177 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.399530 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:41.899514335 +0000 UTC m=+154.522892876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.506015 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.506310 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.006294696 +0000 UTC m=+154.629673237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.607235 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.607560 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.10754831 +0000 UTC m=+154.730926851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.708546 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.708682 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.20866119 +0000 UTC m=+154.832039731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.708979 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.709361 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.209344191 +0000 UTC m=+154.832722732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.814523 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.814873 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.314858262 +0000 UTC m=+154.938236803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.826994 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" event={"ID":"af9efd93-5eee-4e16-a36f-25d29663ff5c","Type":"ContainerStarted","Data":"087492a1f998bb446280537f59ce6f23c5eeb89be066693e8a30a9a875cc014f"}
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.847879 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d40560e4209ff10377ae7b31a54870b27ed4802d81dcf7e0f934985ebef20b06"}
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.848611 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.875871 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" event={"ID":"94f9b51c-2051-4b01-bf38-09a32c853699","Type":"ContainerStarted","Data":"e4c108a3d927579a1edc69b992fbe3b3c10834b5a6eb8520f949421b6cdf48b9"}
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.884091 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b5faafb460043cec164f4b8f9f9297f5d4caf0cdb930a60bd143798e6a00c5a8"}
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.901268 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8dfa2b202baf4ef37cbe3142db96ed7b54d2bf17ff0dca7a6f3d07bd8e3b16b3"}
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.915946 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:41 crc kubenswrapper[4684]: E0123 09:09:41.918565 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.418552885 +0000 UTC m=+155.041931426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.924442 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" event={"ID":"e9844493-3620-4f52-bfae-61a79062d001","Type":"ContainerStarted","Data":"55722eef854b7292d5cb0d3815c5513861dada284c361f44d6410a7d9bdc6eda"}
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.924557 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.925494 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.936418 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tfmsb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.936467 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.936658 4684 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dxd9h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Jan 23 09:09:41 crc kubenswrapper[4684]: I0123 09:09:41.936682 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" podUID="e60787da-c4f0-4034-b543-f70e46a6ded4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.016830 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.018404 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.518383943 +0000 UTC m=+155.141762484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.118490 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.118936 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.618920984 +0000 UTC m=+155.242299525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.120632 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" podStartSLOduration=135.120615788 podStartE2EDuration="2m15.120615788s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:41.091045952 +0000 UTC m=+153.714424493" watchObservedRunningTime="2026-01-23 09:09:42.120615788 +0000 UTC m=+154.743994329"
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.121262 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r9qbw" podStartSLOduration=134.121258349 podStartE2EDuration="2m14.121258349s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.119148641 +0000 UTC m=+154.742527182" watchObservedRunningTime="2026-01-23 09:09:42.121258349 +0000 UTC m=+154.744636890"
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.153270 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:42 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:42 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:42 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.153805 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.225395 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.225571 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.72554446 +0000 UTC m=+155.348923011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.225646 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.226021 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.726010125 +0000 UTC m=+155.349388716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.327449 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.331198 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.831170394 +0000 UTC m=+155.454548935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.411648 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tk452" podStartSLOduration=134.41162922 podStartE2EDuration="2m14.41162922s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.307273686 +0000 UTC m=+154.930652227" watchObservedRunningTime="2026-01-23 09:09:42.41162922 +0000 UTC m=+155.035007761"
Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.434856 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.445391 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:42.945359344 +0000 UTC m=+155.568737885 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.474602 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" podStartSLOduration=134.474580113 podStartE2EDuration="2m14.474580113s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.472114204 +0000 UTC m=+155.095492745" watchObservedRunningTime="2026-01-23 09:09:42.474580113 +0000 UTC m=+155.097958654" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.475194 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5jrnp" podStartSLOduration=135.475187612 podStartE2EDuration="2m15.475187612s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.413855722 +0000 UTC m=+155.037234263" watchObservedRunningTime="2026-01-23 09:09:42.475187612 +0000 UTC m=+155.098566153" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.525023 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5r2wv" podStartSLOduration=135.525007433 podStartE2EDuration="2m15.525007433s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.52458328 +0000 UTC m=+155.147961811" watchObservedRunningTime="2026-01-23 09:09:42.525007433 +0000 UTC m=+155.148385974" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.536564 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.537071 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.03705073 +0000 UTC m=+155.660429271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.584218 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g8kmw" podStartSLOduration=134.584196525 podStartE2EDuration="2m14.584196525s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.580774756 +0000 UTC m=+155.204153307" watchObservedRunningTime="2026-01-23 09:09:42.584196525 +0000 UTC m=+155.207575066" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.638286 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.638635 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.138622415 +0000 UTC m=+155.762000956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.739170 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.739522 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.239507497 +0000 UTC m=+155.862886038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.758261 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dhf86" podStartSLOduration=135.758236718 podStartE2EDuration="2m15.758236718s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.716179647 +0000 UTC m=+155.339558208" watchObservedRunningTime="2026-01-23 09:09:42.758236718 +0000 UTC m=+155.381615259" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.760354 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" podStartSLOduration=134.760345546 podStartE2EDuration="2m14.760345546s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.757489294 +0000 UTC m=+155.380867835" watchObservedRunningTime="2026-01-23 09:09:42.760345546 +0000 UTC m=+155.383724097" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.797688 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" podStartSLOduration=135.797671086 podStartE2EDuration="2m15.797671086s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.796191458 +0000 UTC m=+155.419569999" watchObservedRunningTime="2026-01-23 09:09:42.797671086 +0000 UTC m=+155.421049627" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.844198 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.844508 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.34449353 +0000 UTC m=+155.967872071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.918332 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kx2tw" podStartSLOduration=135.918318013 podStartE2EDuration="2m15.918318013s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.843144547 +0000 UTC m=+155.466523108" watchObservedRunningTime="2026-01-23 09:09:42.918318013 +0000 UTC m=+155.541696554" Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.945083 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:42 crc kubenswrapper[4684]: E0123 09:09:42.945483 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.445467655 +0000 UTC m=+156.068846196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.970967 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" event={"ID":"9eb90a45-05a1-450a-93d7-d20129d62e40","Type":"ContainerStarted","Data":"6992e4e82931d6658e81a0038576bf774d2e9ae20bf6855da1cb018fa67ff4ff"} Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.976862 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" event={"ID":"ebc04459-cb74-4868-8eb4-51a4d8856890","Type":"ContainerStarted","Data":"1739d6a3959afe3bd0144dda013bb05eabc497cfabd96f4b9953e96607f503dc"} Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.990335 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bxczb" event={"ID":"6ad4033e-405b-4649-a039-5169aa401f18","Type":"ContainerStarted","Data":"21ec6f5c5d5d642517bff792870bed2ec9800e194e0268dcaba2771a7a0a5aae"} Jan 23 09:09:42 crc kubenswrapper[4684]: I0123 09:09:42.991079 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bxczb" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.000557 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zp7ft" podStartSLOduration=135.000537445 podStartE2EDuration="2m15.000537445s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:43.000421901 +0000 UTC m=+155.623800462" watchObservedRunningTime="2026-01-23 09:09:43.000537445 +0000 UTC m=+155.623915996" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.001207 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgngb" podStartSLOduration=135.001199646 podStartE2EDuration="2m15.001199646s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:42.919553533 +0000 UTC m=+155.542932074" watchObservedRunningTime="2026-01-23 09:09:43.001199646 +0000 UTC m=+155.624578177" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.011715 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" event={"ID":"e6245e77-409a-4116-8c6e-78b21d87529f","Type":"ContainerStarted","Data":"aa3a23bedffc2d57cea5e7b83473c0c101cae97500541e5876709fea9b7aa7e1"} Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.012384 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tfmsb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.012434 4684 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.028916 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dxd9h" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.048444 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.049523 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.549511729 +0000 UTC m=+156.172890270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.103672 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bxczb" podStartSLOduration=20.103635498 podStartE2EDuration="20.103635498s" podCreationTimestamp="2026-01-23 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:43.038105942 +0000 UTC m=+155.661484493" watchObservedRunningTime="2026-01-23 09:09:43.103635498 +0000 UTC m=+155.727014039" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.104842 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k7fnj" podStartSLOduration=135.104831117 podStartE2EDuration="2m15.104831117s" podCreationTimestamp="2026-01-23 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:43.096289582 +0000 UTC m=+155.719668143" watchObservedRunningTime="2026-01-23 09:09:43.104831117 +0000 UTC m=+155.728209658" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.149520 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.149934 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.649914315 +0000 UTC m=+156.273292856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.150092 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.153545 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.653532272 +0000 UTC m=+156.276910813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.153599 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 09:09:43 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 23 09:09:43 crc kubenswrapper[4684]: [+]process-running ok Jan 23 09:09:43 crc kubenswrapper[4684]: healthz check failed Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.153626 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.200221 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" podStartSLOduration=136.200204062 podStartE2EDuration="2m16.200204062s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:43.199685825 +0000 UTC m=+155.823064386" watchObservedRunningTime="2026-01-23 09:09:43.200204062 +0000 UTC m=+155.823582593" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.251786 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.252074 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.752042277 +0000 UTC m=+156.375420828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.252138 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.252491 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.752475651 +0000 UTC m=+156.375854252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.353339 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.353525 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.853500128 +0000 UTC m=+156.476878669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.353869 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.354199 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.85419055 +0000 UTC m=+156.477569091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.455125 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.455375 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.955338871 +0000 UTC m=+156.578717412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.455493 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.456010 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:43.955998362 +0000 UTC m=+156.579376903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.558076 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.558362 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.058336051 +0000 UTC m=+156.681714592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.558632 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.559066 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.059057364 +0000 UTC m=+156.682435905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.659843 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.659981 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.159955946 +0000 UTC m=+156.783334487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.660193 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.660483 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.160474093 +0000 UTC m=+156.783852634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.728789 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.728850 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.762278 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.762603 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.262578404 +0000 UTC m=+156.885956945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.863835 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.864285 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.364268761 +0000 UTC m=+156.987647302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.924922 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" Jan 23 09:09:43 crc kubenswrapper[4684]: I0123 09:09:43.967975 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:43 crc kubenswrapper[4684]: E0123 09:09:43.968353 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.468332905 +0000 UTC m=+157.091711446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.016690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" event={"ID":"52f6483b-3d4f-482d-8802-fb7ba6736b69","Type":"ContainerStarted","Data":"0511bbe32b25f565b31442bd7937800fef80938052de864c742980543c5fd7b1"} Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.069086 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.069468 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.569452405 +0000 UTC m=+157.192830936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.150711 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 09:09:44 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 23 09:09:44 crc kubenswrapper[4684]: [+]process-running ok Jan 23 09:09:44 crc kubenswrapper[4684]: healthz check failed Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.150801 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.170342 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.170788 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 09:09:44.67074449 +0000 UTC m=+157.294123031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.171106 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.171524 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.671507244 +0000 UTC m=+157.294885865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.205590 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.206728 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.209534 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.217328 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.221592 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.274306 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.274501 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 09:09:44.774469933 +0000 UTC m=+157.397848474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.274591 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.274802 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.274856 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.275199 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.775189206 +0000 UTC m=+157.398567807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.376049 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.376183 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.876165381 +0000 UTC m=+157.499543922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.376559 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.376596 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.376678 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.376828 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.377104 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.877084851 +0000 UTC m=+157.500463462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.462222 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.478028 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.478323 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:44.978307344 +0000 UTC m=+157.601685885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.521810 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.579973 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.580366 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.080350543 +0000 UTC m=+157.703729084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.681250 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.681727 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.18170933 +0000 UTC m=+157.805087871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.783252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.783641 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.283627725 +0000 UTC m=+157.907006266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.813406 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.813779 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.884735 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.884922 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.384893199 +0000 UTC m=+158.008271750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.885035 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.885337 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.385325253 +0000 UTC m=+158.008703794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.925244 4684 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7g8g8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.925317 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" podUID="e19380fe-fa6c-4c7e-a706-aea1c30a6013" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.925716 4684 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7g8g8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.925745 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8" podUID="e19380fe-fa6c-4c7e-a706-aea1c30a6013" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:09:44 crc kubenswrapper[4684]: I0123 09:09:44.986156 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:44 crc kubenswrapper[4684]: E0123 09:09:44.986526 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.486498695 +0000 UTC m=+158.109877236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.089563 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.089875 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.090561 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.090966 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.590952641 +0000 UTC m=+158.214331182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.094681 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x2mrs"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.096306 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.115791 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.150604 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pc4kj"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.154885 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.154995 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.157941 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:45 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:45 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:45 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.157993 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.158605 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.174733 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.191254 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.191617 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2sj\" (UniqueName: \"kubernetes.io/projected/b97308cc-f7d2-4693-8990-76cbb4c9abff-kube-api-access-pf2sj\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.191693 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-utilities\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.191862 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-catalog-content\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.197619 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.697592408 +0000 UTC m=+158.320970969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.215084 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.301586 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-utilities\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.301737 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.301886 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-catalog-content\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.301931 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-catalog-content\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.301967 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v857v\" (UniqueName: \"kubernetes.io/projected/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-kube-api-access-v857v\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.302044 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-utilities\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.302119 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf2sj\" (UniqueName: \"kubernetes.io/projected/b97308cc-f7d2-4693-8990-76cbb4c9abff-kube-api-access-pf2sj\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.303158 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.80314173 +0000 UTC m=+158.426520271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.303458 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-utilities\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.304381 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-catalog-content\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.314722 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc4kj"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.348458 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2mrs"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.403537 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.403856 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.903828156 +0000 UTC m=+158.527206697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.404252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-catalog-content\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.404383 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v857v\" (UniqueName: \"kubernetes.io/projected/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-kube-api-access-v857v\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.404488 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-utilities\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.404633 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.405110 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:45.905098116 +0000 UTC m=+158.528476657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.405963 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-catalog-content\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.406640 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-utilities\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.408336 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf2sj\" (UniqueName: \"kubernetes.io/projected/b97308cc-f7d2-4693-8990-76cbb4c9abff-kube-api-access-pf2sj\") pod \"certified-operators-x2mrs\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.428332 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.428397 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.428358 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.428746 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.433026 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2mrs"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.437068 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vk9hn"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.438311 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.511332 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.511409 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vk9hn"]
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.511779 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.011738653 +0000 UTC m=+158.635117194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.520502 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v857v\" (UniqueName: \"kubernetes.io/projected/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-kube-api-access-v857v\") pod \"community-operators-pc4kj\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.567157 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.617899 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8hxb\" (UniqueName: \"kubernetes.io/projected/0cd73bd8-4034-44e9-b00a-75ea938360c8-kube-api-access-q8hxb\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.631663 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-utilities\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.632579 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.632674 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-catalog-content\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.633937 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.13392174 +0000 UTC m=+158.757300281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.648146 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4w77d"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.649885 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.685172 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4w77d"]
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.734028 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.734240 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8hxb\" (UniqueName: \"kubernetes.io/projected/0cd73bd8-4034-44e9-b00a-75ea938360c8-kube-api-access-q8hxb\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.734285 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-utilities\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.734320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-catalog-content\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.735312 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-catalog-content\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.735394 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.2353794 +0000 UTC m=+158.858757941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.735829 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-utilities\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.798350 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc4kj"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.817318 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g8g8"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.840790 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8hxb\" (UniqueName: \"kubernetes.io/projected/0cd73bd8-4034-44e9-b00a-75ea938360c8-kube-api-access-q8hxb\") pod \"certified-operators-vk9hn\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.840861 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-catalog-content\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.841247 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-utilities\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.841327 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpttc\" (UniqueName: \"kubernetes.io/projected/6386382b-e651-4888-857e-a3a7325f1f14-kube-api-access-cpttc\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.841451 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.841830 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.341815721 +0000 UTC m=+158.965194262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.945488 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.945662 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-utilities\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.945754 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpttc\" (UniqueName: \"kubernetes.io/projected/6386382b-e651-4888-857e-a3a7325f1f14-kube-api-access-cpttc\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.945848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-catalog-content\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.946235 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-catalog-content\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:45 crc kubenswrapper[4684]: E0123 09:09:45.946315 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.446299118 +0000 UTC m=+159.069677659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:45 crc kubenswrapper[4684]: I0123 09:09:45.946562 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-utilities\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.055621 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpttc\" (UniqueName: \"kubernetes.io/projected/6386382b-e651-4888-857e-a3a7325f1f14-kube-api-access-cpttc\") pod \"community-operators-4w77d\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.056371 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.056788 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.556773749 +0000 UTC m=+159.180152290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.099460 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk9hn"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.149865 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" event={"ID":"52f6483b-3d4f-482d-8802-fb7ba6736b69","Type":"ContainerStarted","Data":"a40184c5558a3e7afa7aa0d1fa045d72b9277e3ac66ecb0a10dc7a481efb0e51"}
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.158760 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.159205 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.659174949 +0000 UTC m=+159.282553500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.160949 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9e7bf0d-a002-48a0-a2fc-4617d4311b10","Type":"ContainerStarted","Data":"16218980d9a956a3e56800344e4835c0c033e40ec9ba741ee4b7ea324977f61d"}
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.184495 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhzj6"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.193823 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:46 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:46 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.193883 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.260368 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.260820 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.760807575 +0000 UTC m=+159.384186116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.313005 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4w77d"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.361962 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.363057 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.863037091 +0000 UTC m=+159.486415642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.467478 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.467831 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:46.967819298 +0000 UTC m=+159.591197839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.568084 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.568475 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:47.068458092 +0000 UTC m=+159.691836633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.650468 4684 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7j9vw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]log ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]etcd ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/max-in-flight-filter ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 23 09:09:46 crc kubenswrapper[4684]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-startinformers ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 23 09:09:46 crc kubenswrapper[4684]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 23 09:09:46 crc kubenswrapper[4684]: livez check failed
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.650812 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw" podUID="e6245e77-409a-4116-8c6e-78b21d87529f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.659199 4684 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.669631 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.669936 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:47.169923673 +0000 UTC m=+159.793302214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.675755 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2mrs"]
Jan 23 09:09:46 crc kubenswrapper[4684]: W0123 09:09:46.704068 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97308cc_f7d2_4693_8990_76cbb4c9abff.slice/crio-b3daf9fb2bd9bbd959b198cc9dba0ca809470f44c696cc6a38eade09391d9dd0 WatchSource:0}: Error finding container b3daf9fb2bd9bbd959b198cc9dba0ca809470f44c696cc6a38eade09391d9dd0: Status 404 returned error can't find the container with id b3daf9fb2bd9bbd959b198cc9dba0ca809470f44c696cc6a38eade09391d9dd0
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.770354 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.770536 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:47.270510885 +0000 UTC m=+159.893889416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.770604 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.770879 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:47.270866456 +0000 UTC m=+159.894244997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.845779 4684 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T09:09:46.659216599Z","Handler":null,"Name":""}
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.871473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.871818 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 09:09:47.37180198 +0000 UTC m=+159.995180521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.905580 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc4kj"]
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.972891 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:46 crc kubenswrapper[4684]: E0123 09:09:46.973434 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 09:09:47.473420046 +0000 UTC m=+160.096798587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wn9b6" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.991662 4684 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 23 09:09:46 crc kubenswrapper[4684]: I0123 09:09:46.991710 4684 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.017928 4684 patch_prober.go:28] interesting pod/console-f9d7485db-wd9fz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.017977 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wd9fz" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.076010 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.127385 4684 kubelet.go:2421]
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74vxp"] Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.128567 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.142036 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.146092 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74vxp"] Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.153228 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 09:09:47 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 23 09:09:47 crc kubenswrapper[4684]: [+]process-running ok Jan 23 09:09:47 crc kubenswrapper[4684]: healthz check failed Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.153310 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.161201 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.177044 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.207669 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerStarted","Data":"b3daf9fb2bd9bbd959b198cc9dba0ca809470f44c696cc6a38eade09391d9dd0"} Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.213976 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerStarted","Data":"2cadccfba472a9129d21ba9328500650192be1557c8ea77badde77e57f6ea4dd"} Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.214001 4684 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.214079 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.233147 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qj7jr"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.254343 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vk9hn"]
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.256816 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.299491 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-catalog-content\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.299615 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8jv6\" (UniqueName: \"kubernetes.io/projected/597fda0b-2292-4816-a498-539a84a87f33-kube-api-access-f8jv6\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.299657 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-utilities\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.326615 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4w77d"]
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.400366 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-catalog-content\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.400439 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8jv6\" (UniqueName: \"kubernetes.io/projected/597fda0b-2292-4816-a498-539a84a87f33-kube-api-access-f8jv6\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.400469 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-utilities\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.401095 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-catalog-content\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.401305 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-utilities\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.421947 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8jv6\" (UniqueName: \"kubernetes.io/projected/597fda0b-2292-4816-a498-539a84a87f33-kube-api-access-f8jv6\") pod \"redhat-marketplace-74vxp\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.485937 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hcd6g"]
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.486867 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.489665 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.497461 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74vxp"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.527429 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcd6g"]
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.588080 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.622187 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-catalog-content\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.622246 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tjv\" (UniqueName: \"kubernetes.io/projected/a32a23a8-fd38-4a01-bc87-e589889a39e6-kube-api-access-g5tjv\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.622305 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-utilities\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.727247 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-catalog-content\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.727301 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tjv\" (UniqueName: \"kubernetes.io/projected/a32a23a8-fd38-4a01-bc87-e589889a39e6-kube-api-access-g5tjv\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.727352 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-utilities\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.727801 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-utilities\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.728018 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-catalog-content\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.753790 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tjv\" (UniqueName: \"kubernetes.io/projected/a32a23a8-fd38-4a01-bc87-e589889a39e6-kube-api-access-g5tjv\") pod \"redhat-marketplace-hcd6g\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.802957 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcd6g"
Jan 23 09:09:47 crc kubenswrapper[4684]: I0123 09:09:47.889677 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wn9b6\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.041945 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.125477 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nnzz"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.126533 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.131398 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.186578 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:48 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:48 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:48 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.186642 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.196791 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nnzz"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.237264 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerStarted","Data":"02a79c96ce85262af4bccbdaf679ca1f8afe6db43539c8474bc14422d95c30f6"}
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.248763 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-catalog-content\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.248801 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-utilities\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.248876 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gk9c\" (UniqueName: \"kubernetes.io/projected/888f4644-d4e6-4334-8711-c552d0ef037a-kube-api-access-6gk9c\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.250285 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerStarted","Data":"e2eababe803b0e383040d4e00a014fd611b2b02d6377f845e753369e476ad8ab"}
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.267012 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9e7bf0d-a002-48a0-a2fc-4617d4311b10","Type":"ContainerStarted","Data":"85110a6508df26fba19fc7b4263885557a2ba0345fe99929c5a5190e99ccae53"}
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.281057 4684 generic.go:334] "Generic (PLEG): container finished" podID="7d3e8240-e3e7-42d7-a0fa-6379a76c546e" containerID="2892349cfbda780621ff677d6c6b8e64018aa431d2495b06c636d820584190b5" exitCode=0
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.281190 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" event={"ID":"7d3e8240-e3e7-42d7-a0fa-6379a76c546e","Type":"ContainerDied","Data":"2892349cfbda780621ff677d6c6b8e64018aa431d2495b06c636d820584190b5"}
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.310402 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bxczb"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.355267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-catalog-content\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.355304 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-utilities\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.355435 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gk9c\" (UniqueName: \"kubernetes.io/projected/888f4644-d4e6-4334-8711-c552d0ef037a-kube-api-access-6gk9c\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.357038 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-catalog-content\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.357253 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-utilities\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.424535 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gk9c\" (UniqueName: \"kubernetes.io/projected/888f4644-d4e6-4334-8711-c552d0ef037a-kube-api-access-6gk9c\") pod \"redhat-operators-9nnzz\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.493279 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vnv8t"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.494715 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.538078 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vnv8t"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.567854 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgdl\" (UniqueName: \"kubernetes.io/projected/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-kube-api-access-vfgdl\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.567960 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-catalog-content\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.567989 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-utilities\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.577256 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nnzz"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.672264 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-catalog-content\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.672575 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-utilities\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.672634 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgdl\" (UniqueName: \"kubernetes.io/projected/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-kube-api-access-vfgdl\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.672959 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-catalog-content\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.673404 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-utilities\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.688313 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcd6g"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.708931 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgdl\" (UniqueName: \"kubernetes.io/projected/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-kube-api-access-vfgdl\") pod \"redhat-operators-vnv8t\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.720022 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74vxp"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.803823 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wn9b6"]
Jan 23 09:09:48 crc kubenswrapper[4684]: W0123 09:09:48.817097 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d94b705_3a9a_4cb2_87f1_b898ba859d79.slice/crio-8c4d6501dd1ed065829a396d4ef2969998745d3985084a988cf8b37d8872ec82 WatchSource:0}: Error finding container 8c4d6501dd1ed065829a396d4ef2969998745d3985084a988cf8b37d8872ec82: Status 404 returned error can't find the container with id 8c4d6501dd1ed065829a396d4ef2969998745d3985084a988cf8b37d8872ec82
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.832137 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vnv8t"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.918894 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.920628 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.925910 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.926395 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.927252 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.980415 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:48 crc kubenswrapper[4684]: I0123 09:09:48.980481 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.081206 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.081267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.081350 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:49 crc kubenswrapper[4684]: E0123 09:09:49.085891 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97308cc_f7d2_4693_8990_76cbb4c9abff.slice/crio-f8d713cb3c6dd62d1d1924fbda88c2164baa1d0bcc5e3c259042314d9890fd95.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.128383 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.137859 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nnzz"]
Jan 23 09:09:49 crc kubenswrapper[4684]: W0123 09:09:49.141297 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod888f4644_d4e6_4334_8711_c552d0ef037a.slice/crio-7a0429e517c619a52ee080046051172e4f048c3b7e3a6df818bf352bef79b571 WatchSource:0}: Error finding container 7a0429e517c619a52ee080046051172e4f048c3b7e3a6df818bf352bef79b571: Status 404 returned error can't find the container with id 7a0429e517c619a52ee080046051172e4f048c3b7e3a6df818bf352bef79b571
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.150912 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:49 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:49 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:49 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.150972 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.237190 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.243376 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vnv8t"]
Jan 23 09:09:49 crc kubenswrapper[4684]: W0123 09:09:49.254620 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a6b0dac_56a9_4bc7_b6f1_fdbe9578f226.slice/crio-ed6afda4661386cdecb0954427f3d46f6d134aa9ea73909bb0066874a733c081 WatchSource:0}: Error finding container ed6afda4661386cdecb0954427f3d46f6d134aa9ea73909bb0066874a733c081: Status 404 returned error can't find the container with id ed6afda4661386cdecb0954427f3d46f6d134aa9ea73909bb0066874a733c081
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.299887 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerStarted","Data":"f8d713cb3c6dd62d1d1924fbda88c2164baa1d0bcc5e3c259042314d9890fd95"}
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.300555 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" event={"ID":"4d94b705-3a9a-4cb2-87f1-b898ba859d79","Type":"ContainerStarted","Data":"8c4d6501dd1ed065829a396d4ef2969998745d3985084a988cf8b37d8872ec82"}
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.301557 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerStarted","Data":"7a0429e517c619a52ee080046051172e4f048c3b7e3a6df818bf352bef79b571"}
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.302351 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerStarted","Data":"13a091f6b0321d9ec401bcc2522dc85c846fb0a496454a8da29c56211b52ad0d"}
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.302966 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerStarted","Data":"ed6afda4661386cdecb0954427f3d46f6d134aa9ea73909bb0066874a733c081"}
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.303623 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerStarted","Data":"6c512678b9d2d1b1ebee786cfdbc46a57fce5a9f38caa72f3cb3e62093dfb242"}
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.514046 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.669383 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.790964 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-secret-volume\") pod \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") "
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.791076 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-config-volume\") pod \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") "
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.791114 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47xb2\" (UniqueName: \"kubernetes.io/projected/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-kube-api-access-47xb2\") pod \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\" (UID: \"7d3e8240-e3e7-42d7-a0fa-6379a76c546e\") "
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.791974 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d3e8240-e3e7-42d7-a0fa-6379a76c546e" (UID: "7d3e8240-e3e7-42d7-a0fa-6379a76c546e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.796517 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d3e8240-e3e7-42d7-a0fa-6379a76c546e" (UID: "7d3e8240-e3e7-42d7-a0fa-6379a76c546e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.796584 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-kube-api-access-47xb2" (OuterVolumeSpecName: "kube-api-access-47xb2") pod "7d3e8240-e3e7-42d7-a0fa-6379a76c546e" (UID: "7d3e8240-e3e7-42d7-a0fa-6379a76c546e"). InnerVolumeSpecName "kube-api-access-47xb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.817946 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.823408 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7j9vw"
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.893072 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.893105 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47xb2\" (UniqueName: \"kubernetes.io/projected/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-kube-api-access-47xb2\") on node \"crc\" DevicePath \"\""
Jan 23 09:09:49 crc kubenswrapper[4684]: I0123 09:09:49.893119 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d3e8240-e3e7-42d7-a0fa-6379a76c546e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.150813 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:50 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:50 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:50 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.151103 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.196829 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.201785 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a1145d8-e0e9-481b-9e5c-65815e74874f-metrics-certs\") pod \"network-metrics-daemon-wrrtl\" (UID: \"8a1145d8-e0e9-481b-9e5c-65815e74874f\") " pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.316748 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.317524 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw" event={"ID":"7d3e8240-e3e7-42d7-a0fa-6379a76c546e","Type":"ContainerDied","Data":"8c68be423129790aead549eda638973712efaa7868e457584bdff95a4981e9c0"}
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.317555 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c68be423129790aead549eda638973712efaa7868e457584bdff95a4981e9c0"
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.323922 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerStarted","Data":"c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f"}
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.330111 4684 generic.go:334] "Generic (PLEG): container finished" podID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerID="f8d713cb3c6dd62d1d1924fbda88c2164baa1d0bcc5e3c259042314d9890fd95" exitCode=0
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.330394 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerDied","Data":"f8d713cb3c6dd62d1d1924fbda88c2164baa1d0bcc5e3c259042314d9890fd95"}
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.336588 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"428aa8aa-295e-47ae-8ef7-9a8f11a4912a","Type":"ContainerStarted","Data":"d40fb1a5fc430092cc4014eafec4876bb75a4bb0ee8e574e3e18a477df09714a"}
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.337792 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerStarted","Data":"9aa7a109bdedcefff1026559d43ae04050530da5d9493dfb559e06d896ee94c3"}
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.398565 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wrrtl"
Jan 23 09:09:50 crc kubenswrapper[4684]: I0123 09:09:50.728765 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wrrtl"]
Jan 23 09:09:50 crc kubenswrapper[4684]: W0123 09:09:50.737901 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a1145d8_e0e9_481b_9e5c_65815e74874f.slice/crio-dbfa31f988fc201ec20db775b9e4d7b08d3672a34de52d284ed697cc2b079579 WatchSource:0}: Error finding container dbfa31f988fc201ec20db775b9e4d7b08d3672a34de52d284ed697cc2b079579: Status 404 returned error can't find the container with id dbfa31f988fc201ec20db775b9e4d7b08d3672a34de52d284ed697cc2b079579
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.151755 4684 patch_prober.go:28] interesting pod/router-default-5444994796-whxn9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 09:09:51 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld
Jan 23 09:09:51 crc kubenswrapper[4684]: [+]process-running ok
Jan 23 09:09:51 crc kubenswrapper[4684]: healthz check failed
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.151821 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whxn9" podUID="637adfa6-5f16-415d-b536-f8c65e5b32c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.344055 4684 generic.go:334] "Generic (PLEG): container finished" podID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerID="9aa7a109bdedcefff1026559d43ae04050530da5d9493dfb559e06d896ee94c3" exitCode=0
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.344138 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerDied","Data":"9aa7a109bdedcefff1026559d43ae04050530da5d9493dfb559e06d896ee94c3"}
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.345814 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerStarted","Data":"1369ef62f47d70a6abfe04a6bccb8793c585d6bd4fa2af1a177195b5b91a127c"}
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.348014 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" event={"ID":"52f6483b-3d4f-482d-8802-fb7ba6736b69","Type":"ContainerStarted","Data":"cb7ff7c26c83afb995fd52f608cb9b4fe4df7fe04a2e9526e1cd9731d251f24a"}
Jan 23 09:09:51 crc kubenswrapper[4684]: I0123 09:09:51.349019 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" event={"ID":"8a1145d8-e0e9-481b-9e5c-65815e74874f","Type":"ContainerStarted","Data":"dbfa31f988fc201ec20db775b9e4d7b08d3672a34de52d284ed697cc2b079579"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.159508 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-whxn9"
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.175978 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-whxn9"
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.361808 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerStarted","Data":"6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.363190 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerStarted","Data":"dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.365034 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerStarted","Data":"284121e3234b1751b9f0e90389bb58657d0ac7e24039d13a2fbbbd3f59e1e44f"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.366468 4684 generic.go:334] "Generic (PLEG): container finished" podID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerID="c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f" exitCode=0
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.366519 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerDied","Data":"c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.368064 4684 generic.go:334] "Generic (PLEG): container finished" podID="6386382b-e651-4888-857e-a3a7325f1f14" containerID="1369ef62f47d70a6abfe04a6bccb8793c585d6bd4fa2af1a177195b5b91a127c" exitCode=0
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.368143 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerDied","Data":"1369ef62f47d70a6abfe04a6bccb8793c585d6bd4fa2af1a177195b5b91a127c"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.369672 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" event={"ID":"4d94b705-3a9a-4cb2-87f1-b898ba859d79","Type":"ContainerStarted","Data":"2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.370876 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerStarted","Data":"ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.374735 4684 generic.go:334] "Generic (PLEG): container finished" podID="b9e7bf0d-a002-48a0-a2fc-4617d4311b10" containerID="85110a6508df26fba19fc7b4263885557a2ba0345fe99929c5a5190e99ccae53" exitCode=0
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.374887 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9e7bf0d-a002-48a0-a2fc-4617d4311b10","Type":"ContainerDied","Data":"85110a6508df26fba19fc7b4263885557a2ba0345fe99929c5a5190e99ccae53"}
Jan 23 09:09:52 crc kubenswrapper[4684]: I0123 09:09:52.376475 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.392743 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" event={"ID":"8a1145d8-e0e9-481b-9e5c-65815e74874f","Type":"ContainerStarted","Data":"07719445778b77e1011b33b5d0690618ba62bb1a2fc11c727e92ab7f5623da51"}
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.394655 4684 generic.go:334] "Generic (PLEG): container finished" podID="888f4644-d4e6-4334-8711-c552d0ef037a" containerID="ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8" exitCode=0
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.394748 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerDied","Data":"ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8"}
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.396807 4684 generic.go:334] "Generic (PLEG): container finished" podID="597fda0b-2292-4816-a498-539a84a87f33" containerID="6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5" exitCode=0
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.396878 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerDied","Data":"6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5"}
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.398649 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"428aa8aa-295e-47ae-8ef7-9a8f11a4912a","Type":"ContainerStarted","Data":"96be333f643b4f2650189f39b1ade090612a4dfdc5d5a5ec47722ee7e4e71276"}
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.400519 4684 generic.go:334] "Generic (PLEG): container finished" podID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerID="dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900" exitCode=0
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.400598 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerDied","Data":"dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900"}
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.403670 4684 generic.go:334] "Generic (PLEG): container finished" podID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerID="284121e3234b1751b9f0e90389bb58657d0ac7e24039d13a2fbbbd3f59e1e44f" exitCode=0
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.403725 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerDied","Data":"284121e3234b1751b9f0e90389bb58657d0ac7e24039d13a2fbbbd3f59e1e44f"}
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.779416 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.805029 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8tk99" podStartSLOduration=30.805013661 podStartE2EDuration="30.805013661s" podCreationTimestamp="2026-01-23 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:53.517898485 +0000 UTC m=+166.141277026" watchObservedRunningTime="2026-01-23 09:09:53.805013661 +0000 UTC m=+166.428392202"
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.858027 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kubelet-dir\") pod \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") "
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.858164 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kube-api-access\") pod \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\" (UID: \"b9e7bf0d-a002-48a0-a2fc-4617d4311b10\") "
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.858201 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9e7bf0d-a002-48a0-a2fc-4617d4311b10" (UID: "b9e7bf0d-a002-48a0-a2fc-4617d4311b10"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.858438 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.863839 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9e7bf0d-a002-48a0-a2fc-4617d4311b10" (UID: "b9e7bf0d-a002-48a0-a2fc-4617d4311b10"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:09:53 crc kubenswrapper[4684]: I0123 09:09:53.959589 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e7bf0d-a002-48a0-a2fc-4617d4311b10-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.412171 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.412322 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9e7bf0d-a002-48a0-a2fc-4617d4311b10","Type":"ContainerDied","Data":"16218980d9a956a3e56800344e4835c0c033e40ec9ba741ee4b7ea324977f61d"}
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.412367 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16218980d9a956a3e56800344e4835c0c033e40ec9ba741ee4b7ea324977f61d"
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.414615 4684 generic.go:334] "Generic (PLEG): container finished" podID="428aa8aa-295e-47ae-8ef7-9a8f11a4912a" containerID="96be333f643b4f2650189f39b1ade090612a4dfdc5d5a5ec47722ee7e4e71276" exitCode=0
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.414692 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"428aa8aa-295e-47ae-8ef7-9a8f11a4912a","Type":"ContainerDied","Data":"96be333f643b4f2650189f39b1ade090612a4dfdc5d5a5ec47722ee7e4e71276"}
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.415477 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
Jan 23 09:09:54 crc kubenswrapper[4684]: I0123 09:09:54.491415 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" podStartSLOduration=147.491393808 podStartE2EDuration="2m27.491393808s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:54.489299251 +0000 UTC m=+167.112677802" watchObservedRunningTime="2026-01-23 09:09:54.491393808 +0000 UTC m=+167.114772349"
Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.427968 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.428021 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.428064 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-mc6nm"
Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.428790 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.428761 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ea015939aefd66860eddf0b0326e052d8bf0bc629873cec14014169e24510457"} pod="openshift-console/downloads-7954f5f757-mc6nm"
containerMessage="Container download-server failed liveness probe, will be restarted" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.428840 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.428851 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" containerID="cri-o://ea015939aefd66860eddf0b0326e052d8bf0bc629873cec14014169e24510457" gracePeriod=2 Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.429404 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.429443 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.548237 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7895"] Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.548621 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" containerID="cri-o://69fae2986f7b62f8976db48a682d2480f1762540e77dedb511982fe427237c74" gracePeriod=30 Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.580208 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"] Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.580423 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" podUID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" containerName="route-controller-manager" containerID="cri-o://8700a71d50cec1c47c155fef4e2d9b6139a53799adbd514adc9cff1fcd8ab8be" gracePeriod=30 Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.848793 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.883473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kube-api-access\") pod \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.883666 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kubelet-dir\") pod \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\" (UID: \"428aa8aa-295e-47ae-8ef7-9a8f11a4912a\") " Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.883781 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "428aa8aa-295e-47ae-8ef7-9a8f11a4912a" (UID: "428aa8aa-295e-47ae-8ef7-9a8f11a4912a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.884158 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.890247 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "428aa8aa-295e-47ae-8ef7-9a8f11a4912a" (UID: "428aa8aa-295e-47ae-8ef7-9a8f11a4912a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:09:55 crc kubenswrapper[4684]: I0123 09:09:55.985681 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/428aa8aa-295e-47ae-8ef7-9a8f11a4912a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:09:56 crc kubenswrapper[4684]: I0123 09:09:56.435016 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"428aa8aa-295e-47ae-8ef7-9a8f11a4912a","Type":"ContainerDied","Data":"d40fb1a5fc430092cc4014eafec4876bb75a4bb0ee8e574e3e18a477df09714a"} Jan 23 09:09:56 crc kubenswrapper[4684]: I0123 09:09:56.435061 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40fb1a5fc430092cc4014eafec4876bb75a4bb0ee8e574e3e18a477df09714a" Jan 23 09:09:56 crc kubenswrapper[4684]: I0123 09:09:56.435069 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.015555 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.019151 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wd9fz" Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.444642 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wrrtl" event={"ID":"8a1145d8-e0e9-481b-9e5c-65815e74874f","Type":"ContainerStarted","Data":"ac713dd11937ac6c4cd00a85b016b4c4c1122f127a6e06896930036b39938af8"} Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.449403 4684 generic.go:334] "Generic (PLEG): container finished" podID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" containerID="8700a71d50cec1c47c155fef4e2d9b6139a53799adbd514adc9cff1fcd8ab8be" exitCode=0 Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.449480 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" event={"ID":"3e8dddad-fbbb-4169-9fd1-c908bc5e3660","Type":"ContainerDied","Data":"8700a71d50cec1c47c155fef4e2d9b6139a53799adbd514adc9cff1fcd8ab8be"} Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.450837 4684 generic.go:334] "Generic (PLEG): container finished" podID="8fa74b73-0b76-426c-a769-39477ab913f6" containerID="ea015939aefd66860eddf0b0326e052d8bf0bc629873cec14014169e24510457" exitCode=0 Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.450932 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mc6nm" event={"ID":"8fa74b73-0b76-426c-a769-39477ab913f6","Type":"ContainerDied","Data":"ea015939aefd66860eddf0b0326e052d8bf0bc629873cec14014169e24510457"} Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.452364 4684 generic.go:334] "Generic (PLEG): container finished" podID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerID="69fae2986f7b62f8976db48a682d2480f1762540e77dedb511982fe427237c74" exitCode=0 Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.452758 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" event={"ID":"513ccd39-0870-4964-85a2-0e9eb9d14a85","Type":"ContainerDied","Data":"69fae2986f7b62f8976db48a682d2480f1762540e77dedb511982fe427237c74"} Jan 23 09:09:57 crc kubenswrapper[4684]: I0123 09:09:57.464086 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wrrtl" podStartSLOduration=150.464070686 podStartE2EDuration="2m30.464070686s" podCreationTimestamp="2026-01-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:09:57.461883115 +0000 UTC m=+170.085261656" watchObservedRunningTime="2026-01-23 09:09:57.464070686 +0000 UTC m=+170.087449217" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.117092 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.150216 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz"] Jan 23 09:10:01 crc kubenswrapper[4684]: E0123 09:10:01.150448 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428aa8aa-295e-47ae-8ef7-9a8f11a4912a" containerName="pruner" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.150462 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="428aa8aa-295e-47ae-8ef7-9a8f11a4912a" containerName="pruner" Jan 23 09:10:01 crc kubenswrapper[4684]: E0123 09:10:01.150476 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3e8240-e3e7-42d7-a0fa-6379a76c546e" containerName="collect-profiles" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.150483 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3e8240-e3e7-42d7-a0fa-6379a76c546e" containerName="collect-profiles" Jan 23 09:10:01 crc kubenswrapper[4684]: E0123 09:10:01.150495 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" containerName="route-controller-manager" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.150503 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" containerName="route-controller-manager" Jan 23 09:10:01 crc kubenswrapper[4684]: E0123 09:10:01.150524 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e7bf0d-a002-48a0-a2fc-4617d4311b10" containerName="pruner" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.150530 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e7bf0d-a002-48a0-a2fc-4617d4311b10" containerName="pruner" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.155334 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df9nx\" (UniqueName: \"kubernetes.io/projected/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-kube-api-access-df9nx\") pod \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.155405 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-client-ca\") pod \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.155532 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-config\") pod \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.155565 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-serving-cert\") pod \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\" (UID: \"3e8dddad-fbbb-4169-9fd1-c908bc5e3660\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.156507 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"3e8dddad-fbbb-4169-9fd1-c908bc5e3660" (UID: "3e8dddad-fbbb-4169-9fd1-c908bc5e3660"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.156542 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-config" (OuterVolumeSpecName: "config") pod "3e8dddad-fbbb-4169-9fd1-c908bc5e3660" (UID: "3e8dddad-fbbb-4169-9fd1-c908bc5e3660"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.157650 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e7bf0d-a002-48a0-a2fc-4617d4311b10" containerName="pruner" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.157684 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" containerName="route-controller-manager" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.157723 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3e8240-e3e7-42d7-a0fa-6379a76c546e" containerName="collect-profiles" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.157738 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="428aa8aa-295e-47ae-8ef7-9a8f11a4912a" containerName="pruner" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.158399 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.159426 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.159448 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.161979 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz"] Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.162674 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-kube-api-access-df9nx" (OuterVolumeSpecName: "kube-api-access-df9nx") pod "3e8dddad-fbbb-4169-9fd1-c908bc5e3660" (UID: "3e8dddad-fbbb-4169-9fd1-c908bc5e3660"). InnerVolumeSpecName "kube-api-access-df9nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.186212 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e8dddad-fbbb-4169-9fd1-c908bc5e3660" (UID: "3e8dddad-fbbb-4169-9fd1-c908bc5e3660"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.260480 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-config\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.260572 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9bdt\" (UniqueName: \"kubernetes.io/projected/b559f4dd-770e-4380-afff-f2ca9f20697f-kube-api-access-g9bdt\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.260790 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b559f4dd-770e-4380-afff-f2ca9f20697f-serving-cert\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.260941 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-client-ca\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.261043 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.261061 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df9nx\" (UniqueName: \"kubernetes.io/projected/3e8dddad-fbbb-4169-9fd1-c908bc5e3660-kube-api-access-df9nx\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.362290 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-config\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.362409 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9bdt\" (UniqueName: \"kubernetes.io/projected/b559f4dd-770e-4380-afff-f2ca9f20697f-kube-api-access-g9bdt\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.362462 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b559f4dd-770e-4380-afff-f2ca9f20697f-serving-cert\") pod 
\"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.362521 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-client-ca\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.364195 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-client-ca\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.364416 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-config\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.368392 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b559f4dd-770e-4380-afff-f2ca9f20697f-serving-cert\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.382208 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9bdt\" (UniqueName: \"kubernetes.io/projected/b559f4dd-770e-4380-afff-f2ca9f20697f-kube-api-access-g9bdt\") pod \"route-controller-manager-644dd99d8f-ms9zz\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.526783 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.815321 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.866676 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-config\") pod \"513ccd39-0870-4964-85a2-0e9eb9d14a85\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.866759 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx4sb\" (UniqueName: \"kubernetes.io/projected/513ccd39-0870-4964-85a2-0e9eb9d14a85-kube-api-access-jx4sb\") pod \"513ccd39-0870-4964-85a2-0e9eb9d14a85\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.866784 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ccd39-0870-4964-85a2-0e9eb9d14a85-serving-cert\") pod \"513ccd39-0870-4964-85a2-0e9eb9d14a85\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.866808 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-proxy-ca-bundles\") pod \"513ccd39-0870-4964-85a2-0e9eb9d14a85\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.866826 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-client-ca\") pod \"513ccd39-0870-4964-85a2-0e9eb9d14a85\" (UID: \"513ccd39-0870-4964-85a2-0e9eb9d14a85\") " Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.871690 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-client-ca" (OuterVolumeSpecName: "client-ca") pod "513ccd39-0870-4964-85a2-0e9eb9d14a85" (UID: "513ccd39-0870-4964-85a2-0e9eb9d14a85"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.872521 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-config" (OuterVolumeSpecName: "config") pod "513ccd39-0870-4964-85a2-0e9eb9d14a85" (UID: "513ccd39-0870-4964-85a2-0e9eb9d14a85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.872538 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "513ccd39-0870-4964-85a2-0e9eb9d14a85" (UID: "513ccd39-0870-4964-85a2-0e9eb9d14a85"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.885394 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513ccd39-0870-4964-85a2-0e9eb9d14a85-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "513ccd39-0870-4964-85a2-0e9eb9d14a85" (UID: "513ccd39-0870-4964-85a2-0e9eb9d14a85"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.892140 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513ccd39-0870-4964-85a2-0e9eb9d14a85-kube-api-access-jx4sb" (OuterVolumeSpecName: "kube-api-access-jx4sb") pod "513ccd39-0870-4964-85a2-0e9eb9d14a85" (UID: "513ccd39-0870-4964-85a2-0e9eb9d14a85"). InnerVolumeSpecName "kube-api-access-jx4sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.968439 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.968490 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.968502 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ccd39-0870-4964-85a2-0e9eb9d14a85-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.968515 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx4sb\" (UniqueName: \"kubernetes.io/projected/513ccd39-0870-4964-85a2-0e9eb9d14a85-kube-api-access-jx4sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:01 crc kubenswrapper[4684]: I0123 09:10:01.968526 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ccd39-0870-4964-85a2-0e9eb9d14a85-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.090995 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz"] Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.146110 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.146154 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg" event={"ID":"3e8dddad-fbbb-4169-9fd1-c908bc5e3660","Type":"ContainerDied","Data":"ae1af42a51f2f9aec2bf7be18eb895ab84cb2b88b72af73af8d7ebfdd2d44ae4"} Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.146739 4684 scope.go:117] "RemoveContainer" containerID="8700a71d50cec1c47c155fef4e2d9b6139a53799adbd514adc9cff1fcd8ab8be" Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.154802 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" event={"ID":"513ccd39-0870-4964-85a2-0e9eb9d14a85","Type":"ContainerDied","Data":"d6e732afeeaf384b46a5419ed102bc340a575932802a41f61141d25044d02c90"} Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.154871 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l7895" Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.192231 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"] Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.221909 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wnhgg"] Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.230749 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7895"] Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.235391 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l7895"] Jan 23 09:10:02 crc kubenswrapper[4684]: I0123 09:10:02.240954 4684 scope.go:117] "RemoveContainer" containerID="69fae2986f7b62f8976db48a682d2480f1762540e77dedb511982fe427237c74" Jan 23 09:10:03 crc kubenswrapper[4684]: I0123 09:10:03.190204 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" event={"ID":"b559f4dd-770e-4380-afff-f2ca9f20697f","Type":"ContainerStarted","Data":"a117d1769475e81ec0a745e9a19cd5b878afb866595d1d71a9873c040bb2f0c0"} Jan 23 09:10:03 crc kubenswrapper[4684]: I0123 09:10:03.633172 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8dddad-fbbb-4169-9fd1-c908bc5e3660" path="/var/lib/kubelet/pods/3e8dddad-fbbb-4169-9fd1-c908bc5e3660/volumes" Jan 23 09:10:03 crc kubenswrapper[4684]: I0123 09:10:03.634042 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" path="/var/lib/kubelet/pods/513ccd39-0870-4964-85a2-0e9eb9d14a85/volumes" Jan 23 09:10:05 crc kubenswrapper[4684]: I0123 09:10:05.247177 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mc6nm" event={"ID":"8fa74b73-0b76-426c-a769-39477ab913f6","Type":"ContainerStarted","Data":"7415b35457ad3f86a8167db4bb8e1f72a058211b36333be7d5c152a8a4abe9f5"} Jan 23 09:10:05 crc kubenswrapper[4684]: I0123 09:10:05.436267 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:05 crc kubenswrapper[4684]: I0123 09:10:05.436334 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.109774 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-666dd99597-k6rxk"] Jan 23 09:10:06 crc kubenswrapper[4684]: E0123 09:10:06.110344 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.110356 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" Jan 23 09:10:06 crc 
kubenswrapper[4684]: I0123 09:10:06.110515 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="513ccd39-0870-4964-85a2-0e9eb9d14a85" containerName="controller-manager" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.111038 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.121593 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.121902 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.122003 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.124253 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.127035 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.128455 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.133574 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-666dd99597-k6rxk"] Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.133655 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.257300 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-config\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.257369 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26hc\" (UniqueName: \"kubernetes.io/projected/4c3952b1-8e23-4e55-a022-523ba6db327c-kube-api-access-m26hc\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.257494 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3952b1-8e23-4e55-a022-523ba6db327c-serving-cert\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.257524 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-proxy-ca-bundles\") pod \"controller-manager-666dd99597-k6rxk\" (UID: 
\"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.257552 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-client-ca\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.263661 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" event={"ID":"b559f4dd-770e-4380-afff-f2ca9f20697f","Type":"ContainerStarted","Data":"a368317946b0a45c6fd34c70243e7eadf4bfd4911f023e39fcdb33a5aa028ab4"} Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.359206 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-config\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.359344 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26hc\" (UniqueName: \"kubernetes.io/projected/4c3952b1-8e23-4e55-a022-523ba6db327c-kube-api-access-m26hc\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.359382 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3952b1-8e23-4e55-a022-523ba6db327c-serving-cert\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.359407 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-proxy-ca-bundles\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.359439 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-client-ca\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.360276 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-client-ca\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.361983 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-config\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.364216 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-proxy-ca-bundles\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.387458 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3952b1-8e23-4e55-a022-523ba6db327c-serving-cert\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.407990 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26hc\" (UniqueName: \"kubernetes.io/projected/4c3952b1-8e23-4e55-a022-523ba6db327c-kube-api-access-m26hc\") pod \"controller-manager-666dd99597-k6rxk\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:06 crc kubenswrapper[4684]: I0123 09:10:06.442367 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:10:07 crc kubenswrapper[4684]: I0123 09:10:07.219781 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" Jan 23 09:10:08 crc kubenswrapper[4684]: I0123 09:10:08.048541 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:10:12 crc kubenswrapper[4684]: I0123 09:10:12.346241 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:10:12 crc kubenswrapper[4684]: I0123 09:10:12.346514 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:12 crc kubenswrapper[4684]: I0123 09:10:12.346569 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:13 crc kubenswrapper[4684]: I0123 09:10:13.354194 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:13 crc kubenswrapper[4684]: I0123 09:10:13.354248 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" 
podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:13 crc kubenswrapper[4684]: I0123 09:10:13.728609 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:10:13 crc kubenswrapper[4684]: I0123 09:10:13.728686 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.427803 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.427875 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.427917 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.427977 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.575684 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podStartSLOduration=20.575668607 podStartE2EDuration="20.575668607s" podCreationTimestamp="2026-01-23 09:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:10:13.373010523 +0000 UTC m=+185.996389084" watchObservedRunningTime="2026-01-23 09:10:15.575668607 +0000 UTC m=+188.199047138" Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.575912 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-666dd99597-k6rxk"] Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.658050 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz"] Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.658261 4684 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" containerID="cri-o://a368317946b0a45c6fd34c70243e7eadf4bfd4911f023e39fcdb33a5aa028ab4" gracePeriod=30 Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.658416 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:15 crc kubenswrapper[4684]: I0123 09:10:15.664733 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:10:17 crc kubenswrapper[4684]: I0123 09:10:17.499271 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 09:10:21 crc kubenswrapper[4684]: I0123 09:10:21.406289 4684 generic.go:334] "Generic (PLEG): container finished" podID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerID="a368317946b0a45c6fd34c70243e7eadf4bfd4911f023e39fcdb33a5aa028ab4" exitCode=0 Jan 23 09:10:21 crc kubenswrapper[4684]: I0123 09:10:21.406398 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" event={"ID":"b559f4dd-770e-4380-afff-f2ca9f20697f","Type":"ContainerDied","Data":"a368317946b0a45c6fd34c70243e7eadf4bfd4911f023e39fcdb33a5aa028ab4"} Jan 23 09:10:21 crc kubenswrapper[4684]: I0123 09:10:21.528849 4684 patch_prober.go:28] interesting pod/route-controller-manager-644dd99d8f-ms9zz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 23 09:10:21 crc kubenswrapper[4684]: I0123 09:10:21.528920 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.297162 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.297947 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.301736 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.301943 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.306218 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.460431 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.460858 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.561769 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.561856 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.562234 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.581834 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:24 crc kubenswrapper[4684]: I0123 09:10:24.646107 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:10:25 crc kubenswrapper[4684]: I0123 09:10:25.427432 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:25 crc kubenswrapper[4684]: I0123 09:10:25.427484 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:25 crc kubenswrapper[4684]: I0123 09:10:25.427951 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:25 crc kubenswrapper[4684]: I0123 09:10:25.427977 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.496562 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.497500 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.506502 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.617973 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.618016 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-var-lock\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.618066 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.719773 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc 
kubenswrapper[4684]: I0123 09:10:28.719833 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.719917 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.719938 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-var-lock\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.720182 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-var-lock\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.741491 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:28 crc kubenswrapper[4684]: I0123 09:10:28.842515 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:10:31 crc kubenswrapper[4684]: I0123 09:10:31.528903 4684 patch_prober.go:28] interesting pod/route-controller-manager-644dd99d8f-ms9zz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 23 09:10:31 crc kubenswrapper[4684]: I0123 09:10:31.529263 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.427636 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.427956 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.427655 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.428007 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.428075 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.429059 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.429115 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.442018 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"7415b35457ad3f86a8167db4bb8e1f72a058211b36333be7d5c152a8a4abe9f5"} pod="openshift-console/downloads-7954f5f757-mc6nm" containerMessage="Container download-server failed liveness probe, will be 
restarted" Jan 23 09:10:35 crc kubenswrapper[4684]: I0123 09:10:35.442081 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" containerID="cri-o://7415b35457ad3f86a8167db4bb8e1f72a058211b36333be7d5c152a8a4abe9f5" gracePeriod=2 Jan 23 09:10:39 crc kubenswrapper[4684]: I0123 09:10:39.532601 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-mc6nm_8fa74b73-0b76-426c-a769-39477ab913f6/download-server/1.log" Jan 23 09:10:39 crc kubenswrapper[4684]: I0123 09:10:39.533215 4684 generic.go:334] "Generic (PLEG): container finished" podID="8fa74b73-0b76-426c-a769-39477ab913f6" containerID="7415b35457ad3f86a8167db4bb8e1f72a058211b36333be7d5c152a8a4abe9f5" exitCode=137 Jan 23 09:10:39 crc kubenswrapper[4684]: I0123 09:10:39.533255 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mc6nm" event={"ID":"8fa74b73-0b76-426c-a769-39477ab913f6","Type":"ContainerDied","Data":"7415b35457ad3f86a8167db4bb8e1f72a058211b36333be7d5c152a8a4abe9f5"} Jan 23 09:10:39 crc kubenswrapper[4684]: I0123 09:10:39.533287 4684 scope.go:117] "RemoveContainer" containerID="ea015939aefd66860eddf0b0326e052d8bf0bc629873cec14014169e24510457" Jan 23 09:10:41 crc kubenswrapper[4684]: I0123 09:10:41.529127 4684 patch_prober.go:28] interesting pod/route-controller-manager-644dd99d8f-ms9zz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 23 09:10:41 crc kubenswrapper[4684]: I0123 09:10:41.529183 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 23 09:10:43 crc kubenswrapper[4684]: I0123 09:10:43.728856 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:10:43 crc kubenswrapper[4684]: I0123 09:10:43.729220 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:10:43 crc kubenswrapper[4684]: I0123 09:10:43.729266 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:10:43 crc kubenswrapper[4684]: I0123 09:10:43.729886 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 
23 09:10:43 crc kubenswrapper[4684]: I0123 09:10:43.729952 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9" gracePeriod=600 Jan 23 09:10:45 crc kubenswrapper[4684]: I0123 09:10:45.428096 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:45 crc kubenswrapper[4684]: I0123 09:10:45.428410 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:10:48 crc kubenswrapper[4684]: I0123 09:10:48.584098 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9" exitCode=0 Jan 23 09:10:48 crc kubenswrapper[4684]: I0123 09:10:48.584142 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9"} Jan 23 09:10:51 crc kubenswrapper[4684]: I0123 09:10:51.528836 4684 patch_prober.go:28] interesting pod/route-controller-manager-644dd99d8f-ms9zz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 23 09:10:51 crc kubenswrapper[4684]: I0123 09:10:51.529883 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 23 09:10:55 crc kubenswrapper[4684]: I0123 09:10:55.428474 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:10:55 crc kubenswrapper[4684]: I0123 09:10:55.428584 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:01 crc kubenswrapper[4684]: I0123 09:11:01.529101 4684 patch_prober.go:28] interesting pod/route-controller-manager-644dd99d8f-ms9zz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= 
Jan 23 09:11:01 crc kubenswrapper[4684]: I0123 09:11:01.530818 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 23 09:11:05 crc kubenswrapper[4684]: I0123 09:11:05.428616 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:05 crc kubenswrapper[4684]: I0123 09:11:05.429606 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.163635 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.200252 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"] Jan 23 09:11:11 crc kubenswrapper[4684]: E0123 09:11:11.200570 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.200585 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.200723 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" containerName="route-controller-manager" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.201201 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.208276 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"] Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266233 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9bdt\" (UniqueName: \"kubernetes.io/projected/b559f4dd-770e-4380-afff-f2ca9f20697f-kube-api-access-g9bdt\") pod \"b559f4dd-770e-4380-afff-f2ca9f20697f\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266303 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-config\") pod \"b559f4dd-770e-4380-afff-f2ca9f20697f\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266382 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-client-ca\") pod \"b559f4dd-770e-4380-afff-f2ca9f20697f\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266444 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b559f4dd-770e-4380-afff-f2ca9f20697f-serving-cert\") pod \"b559f4dd-770e-4380-afff-f2ca9f20697f\" (UID: \"b559f4dd-770e-4380-afff-f2ca9f20697f\") " Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266655 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-client-ca\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266685 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5p9\" (UniqueName: \"kubernetes.io/projected/a3346474-c3f2-4ef3-bcee-65f80e85ace4-kube-api-access-zx5p9\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266780 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-config\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.266805 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3346474-c3f2-4ef3-bcee-65f80e85ace4-serving-cert\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc 
kubenswrapper[4684]: I0123 09:11:11.267586 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-client-ca" (OuterVolumeSpecName: "client-ca") pod "b559f4dd-770e-4380-afff-f2ca9f20697f" (UID: "b559f4dd-770e-4380-afff-f2ca9f20697f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.267633 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-config" (OuterVolumeSpecName: "config") pod "b559f4dd-770e-4380-afff-f2ca9f20697f" (UID: "b559f4dd-770e-4380-afff-f2ca9f20697f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.274045 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b559f4dd-770e-4380-afff-f2ca9f20697f-kube-api-access-g9bdt" (OuterVolumeSpecName: "kube-api-access-g9bdt") pod "b559f4dd-770e-4380-afff-f2ca9f20697f" (UID: "b559f4dd-770e-4380-afff-f2ca9f20697f"). InnerVolumeSpecName "kube-api-access-g9bdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.275145 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b559f4dd-770e-4380-afff-f2ca9f20697f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b559f4dd-770e-4380-afff-f2ca9f20697f" (UID: "b559f4dd-770e-4380-afff-f2ca9f20697f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.367971 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-client-ca\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.368034 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5p9\" (UniqueName: \"kubernetes.io/projected/a3346474-c3f2-4ef3-bcee-65f80e85ace4-kube-api-access-zx5p9\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.368101 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-config\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.368123 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3346474-c3f2-4ef3-bcee-65f80e85ace4-serving-cert\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.368939 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-client-ca\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.369276 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-config\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.369428 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9bdt\" (UniqueName: \"kubernetes.io/projected/b559f4dd-770e-4380-afff-f2ca9f20697f-kube-api-access-g9bdt\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.369460 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.369471 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b559f4dd-770e-4380-afff-f2ca9f20697f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.369482 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b559f4dd-770e-4380-afff-f2ca9f20697f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.372765 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3346474-c3f2-4ef3-bcee-65f80e85ace4-serving-cert\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.387685 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5p9\" (UniqueName: \"kubernetes.io/projected/a3346474-c3f2-4ef3-bcee-65f80e85ace4-kube-api-access-zx5p9\") pod \"route-controller-manager-6d7f76996d-965j8\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") " pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:11 crc kubenswrapper[4684]: I0123 09:11:11.521289 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:12 crc kubenswrapper[4684]: I0123 09:11:12.056074 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" event={"ID":"b559f4dd-770e-4380-afff-f2ca9f20697f","Type":"ContainerDied","Data":"a117d1769475e81ec0a745e9a19cd5b878afb866595d1d71a9873c040bb2f0c0"} Jan 23 09:11:12 crc kubenswrapper[4684]: I0123 09:11:12.056162 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz" Jan 23 09:11:12 crc kubenswrapper[4684]: I0123 09:11:12.081328 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz"] Jan 23 09:11:12 crc kubenswrapper[4684]: I0123 09:11:12.084327 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-644dd99d8f-ms9zz"] Jan 23 09:11:13 crc kubenswrapper[4684]: I0123 09:11:13.592375 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b559f4dd-770e-4380-afff-f2ca9f20697f" path="/var/lib/kubelet/pods/b559f4dd-770e-4380-afff-f2ca9f20697f/volumes" Jan 23 09:11:15 crc kubenswrapper[4684]: I0123 09:11:15.433459 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:15 crc kubenswrapper[4684]: I0123 09:11:15.433825 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:17 crc kubenswrapper[4684]: E0123 09:11:17.920123 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 09:11:17 crc kubenswrapper[4684]: E0123 09:11:17.920620 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gk9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9nnzz_openshift-marketplace(888f4644-d4e6-4334-8711-c552d0ef037a): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:17 crc kubenswrapper[4684]: E0123 09:11:17.922565 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9nnzz" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" Jan 23 09:11:23 crc kubenswrapper[4684]: E0123 09:11:23.442744 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9nnzz" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" Jan 23 09:11:23 crc kubenswrapper[4684]: E0123 09:11:23.927853 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 09:11:23 crc kubenswrapper[4684]: E0123 09:11:23.928038 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pf2sj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x2mrs_openshift-marketplace(b97308cc-f7d2-4693-8990-76cbb4c9abff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:23 crc kubenswrapper[4684]: E0123 09:11:23.929290 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x2mrs" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" Jan 23 09:11:24 crc 
kubenswrapper[4684]: E0123 09:11:24.521737 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/certified-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 09:11:24 crc kubenswrapper[4684]: E0123 09:11:24.522458 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8hxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vk9hn_openshift-marketplace(0cd73bd8-4034-44e9-b00a-75ea938360c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/certified-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" logger="UnhandledError" Jan 23 09:11:24 crc kubenswrapper[4684]: E0123 09:11:24.526286 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \\\"https://registry.redhat.io/v2/redhat/certified-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\\\": context canceled\"" pod="openshift-marketplace/certified-operators-vk9hn" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.089040 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x2mrs" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.147897 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.148081 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vnv8t_openshift-marketplace(5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.150025 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vnv8t" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.177762 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.177935 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5tjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hcd6g_openshift-marketplace(a32a23a8-fd38-4a01-bc87-e589889a39e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:25 crc kubenswrapper[4684]: E0123 09:11:25.179181 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hcd6g" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" Jan 23 09:11:25 crc kubenswrapper[4684]: I0123 09:11:25.427647 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:25 crc kubenswrapper[4684]: I0123 09:11:25.427734 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:35 crc kubenswrapper[4684]: I0123 09:11:35.427817 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:35 crc kubenswrapper[4684]: I0123 09:11:35.428382 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:40 crc kubenswrapper[4684]: I0123 09:11:40.349474 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hv7d8"] Jan 
23 09:11:44 crc kubenswrapper[4684]: E0123 09:11:44.747254 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 09:11:44 crc kubenswrapper[4684]: E0123 09:11:44.747720 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v857v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pc4kj_openshift-marketplace(2f9880b0-14ae-4649-b7ba-6d0dd1ab5151): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:44 crc kubenswrapper[4684]: E0123 09:11:44.748902 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pc4kj" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" Jan 23 09:11:44 crc kubenswrapper[4684]: I0123 09:11:44.870562 4684 scope.go:117] "RemoveContainer" containerID="a368317946b0a45c6fd34c70243e7eadf4bfd4911f023e39fcdb33a5aa028ab4" Jan 23 09:11:44 crc kubenswrapper[4684]: E0123 09:11:44.942744 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 09:11:44 crc kubenswrapper[4684]: E0123 09:11:44.942900 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8jv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-74vxp_openshift-marketplace(597fda0b-2292-4816-a498-539a84a87f33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:44 crc kubenswrapper[4684]: E0123 09:11:44.944251 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-74vxp" podUID="597fda0b-2292-4816-a498-539a84a87f33" Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.237423 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-mc6nm_8fa74b73-0b76-426c-a769-39477ab913f6/download-server/1.log" Jan 23 09:11:45 crc kubenswrapper[4684]: E0123 09:11:45.242218 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-74vxp" podUID="597fda0b-2292-4816-a498-539a84a87f33" Jan 23 09:11:45 crc kubenswrapper[4684]: E0123 09:11:45.244745 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pc4kj" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.282333 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 09:11:45 crc kubenswrapper[4684]: E0123 09:11:45.295077 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 09:11:45 crc kubenswrapper[4684]: E0123 09:11:45.295512 4684 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpttc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4w77d_openshift-marketplace(6386382b-e651-4888-857e-a3a7325f1f14): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:11:45 crc kubenswrapper[4684]: E0123 09:11:45.297264 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4w77d" podUID="6386382b-e651-4888-857e-a3a7325f1f14" Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.405617 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"] Jan 23 09:11:45 crc kubenswrapper[4684]: W0123 09:11:45.417985 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3346474_c3f2_4ef3_bcee_65f80e85ace4.slice/crio-94353a2406ab348f5d41a13b774781e37b5574d76077bd174e3111670d5f5633 WatchSource:0}: Error finding container 94353a2406ab348f5d41a13b774781e37b5574d76077bd174e3111670d5f5633: Status 404 returned error can't find the container with id 94353a2406ab348f5d41a13b774781e37b5574d76077bd174e3111670d5f5633 Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.428757 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.428839 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.560558 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-666dd99597-k6rxk"] Jan 23 09:11:45 crc kubenswrapper[4684]: W0123 09:11:45.587029 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podedcaacae_d1c5_4a66_9220_54ee4b5991ac.slice/crio-aa1ae4ae08fd4acf2f597f9a976c5dfa9d2ec38907d8c6d95942bc0efbcbec66 WatchSource:0}: Error finding container aa1ae4ae08fd4acf2f597f9a976c5dfa9d2ec38907d8c6d95942bc0efbcbec66: Status 404 returned error can't find the container with id aa1ae4ae08fd4acf2f597f9a976c5dfa9d2ec38907d8c6d95942bc0efbcbec66 Jan 23 09:11:45 crc kubenswrapper[4684]: I0123 09:11:45.611651 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.246014 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"11bcae14-ba2a-42b3-85b1-edbfec10d93a","Type":"ContainerStarted","Data":"f1e367a9aa2cd94e5b07f61528286d35e703d6aed43c7848308d357e57500b74"} Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.246583 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"11bcae14-ba2a-42b3-85b1-edbfec10d93a","Type":"ContainerStarted","Data":"6a62396e618e6acadfc8a8072bc048776ccbf02e265e58f225633aef55b8460b"} Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.247982 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" event={"ID":"a3346474-c3f2-4ef3-bcee-65f80e85ace4","Type":"ContainerStarted","Data":"000c480365208dc2c60e6a41d525590947cd55e78d338c2d73957c31fb245675"} Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.248008 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" event={"ID":"a3346474-c3f2-4ef3-bcee-65f80e85ace4","Type":"ContainerStarted","Data":"94353a2406ab348f5d41a13b774781e37b5574d76077bd174e3111670d5f5633"} Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.248178 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.250399 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-mc6nm_8fa74b73-0b76-426c-a769-39477ab913f6/download-server/1.log" Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.250624 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mc6nm" event={"ID":"8fa74b73-0b76-426c-a769-39477ab913f6","Type":"ContainerStarted","Data":"38031c59ba5c0113c181a122f262d83ad32ca935b54bb63c28520c7d21008772"} Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.250908 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.250982 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.250982 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.251024 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.253384 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"edcaacae-d1c5-4a66-9220-54ee4b5991ac","Type":"ContainerStarted","Data":"104390b7d36d2bb63212448fb64f1a139447c9ca332f78344ccd7b61d1a97a76"}
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.253435 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"edcaacae-d1c5-4a66-9220-54ee4b5991ac","Type":"ContainerStarted","Data":"aa1ae4ae08fd4acf2f597f9a976c5dfa9d2ec38907d8c6d95942bc0efbcbec66"}
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.255849 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" event={"ID":"4c3952b1-8e23-4e55-a022-523ba6db327c","Type":"ContainerStarted","Data":"083eff80b0b15e25896b156af3a1d44000a148b9115182bb0c7989b28777bc96"}
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.255894 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" event={"ID":"4c3952b1-8e23-4e55-a022-523ba6db327c","Type":"ContainerStarted","Data":"04bdf1c22d7d4b07411e7621982f53406dffb9197e51565b96cf29ab33054cf5"}
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.256006 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerName="controller-manager" containerID="cri-o://083eff80b0b15e25896b156af3a1d44000a148b9115182bb0c7989b28777bc96" gracePeriod=30
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.256490 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.256798 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.267341 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"4f60477adb3b4dbc421728a3db0033ffac18f45a46c7ebdec44ba3b981e2ba81"}
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.268977 4684 patch_prober.go:28] interesting pod/controller-manager-666dd99597-k6rxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": EOF" start-of-body=
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.269047 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": EOF"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.301137 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=82.301120831 podStartE2EDuration="1m22.301120831s" podCreationTimestamp="2026-01-23 09:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:46.28174018 +0000 UTC m=+278.905118731" watchObservedRunningTime="2026-01-23 09:11:46.301120831 +0000 UTC m=+278.924499372"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.301456 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" podStartSLOduration=91.301451911 podStartE2EDuration="1m31.301451911s" podCreationTimestamp="2026-01-23 09:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:46.299133551 +0000 UTC m=+278.922512112" watchObservedRunningTime="2026-01-23 09:11:46.301451911 +0000 UTC m=+278.924830452"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.342093 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=78.342068861 podStartE2EDuration="1m18.342068861s" podCreationTimestamp="2026-01-23 09:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:46.337399369 +0000 UTC m=+278.960777920" watchObservedRunningTime="2026-01-23 09:11:46.342068861 +0000 UTC m=+278.965447402"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.406711 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" podStartSLOduration=111.406677973 podStartE2EDuration="1m51.406677973s" podCreationTimestamp="2026-01-23 09:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:46.405933301 +0000 UTC m=+279.029311852" watchObservedRunningTime="2026-01-23 09:11:46.406677973 +0000 UTC m=+279.030056514"
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.444384 4684 patch_prober.go:28] interesting pod/controller-manager-666dd99597-k6rxk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Jan 23 09:11:46 crc kubenswrapper[4684]: I0123 09:11:46.444442 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Jan 23 09:11:47 crc kubenswrapper[4684]: I0123 09:11:47.284094 4684 generic.go:334] "Generic (PLEG): container finished" podID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerID="083eff80b0b15e25896b156af3a1d44000a148b9115182bb0c7989b28777bc96" exitCode=0
pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" event={"ID":"4c3952b1-8e23-4e55-a022-523ba6db327c","Type":"ContainerDied","Data":"083eff80b0b15e25896b156af3a1d44000a148b9115182bb0c7989b28777bc96"} Jan 23 09:11:47 crc kubenswrapper[4684]: I0123 09:11:47.286984 4684 generic.go:334] "Generic (PLEG): container finished" podID="11bcae14-ba2a-42b3-85b1-edbfec10d93a" containerID="f1e367a9aa2cd94e5b07f61528286d35e703d6aed43c7848308d357e57500b74" exitCode=0 Jan 23 09:11:47 crc kubenswrapper[4684]: I0123 09:11:47.287098 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"11bcae14-ba2a-42b3-85b1-edbfec10d93a","Type":"ContainerDied","Data":"f1e367a9aa2cd94e5b07f61528286d35e703d6aed43c7848308d357e57500b74"} Jan 23 09:11:47 crc kubenswrapper[4684]: I0123 09:11:47.288216 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:47 crc kubenswrapper[4684]: I0123 09:11:47.288302 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:47 crc kubenswrapper[4684]: E0123 09:11:47.629071 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4w77d" podUID="6386382b-e651-4888-857e-a3a7325f1f14" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.232344 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.265592 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55f64d9478-gvnsf"] Jan 23 09:11:48 crc kubenswrapper[4684]: E0123 09:11:48.265867 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerName="controller-manager" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.265890 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerName="controller-manager" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.265999 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" containerName="controller-manager" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.273108 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.290993 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-client-ca\") pod \"4c3952b1-8e23-4e55-a022-523ba6db327c\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.291083 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3952b1-8e23-4e55-a022-523ba6db327c-serving-cert\") pod \"4c3952b1-8e23-4e55-a022-523ba6db327c\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.291127 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26hc\" (UniqueName: \"kubernetes.io/projected/4c3952b1-8e23-4e55-a022-523ba6db327c-kube-api-access-m26hc\") pod \"4c3952b1-8e23-4e55-a022-523ba6db327c\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.291189 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-config\") pod \"4c3952b1-8e23-4e55-a022-523ba6db327c\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.291266 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-proxy-ca-bundles\") pod \"4c3952b1-8e23-4e55-a022-523ba6db327c\" (UID: \"4c3952b1-8e23-4e55-a022-523ba6db327c\") " Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.292330 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c3952b1-8e23-4e55-a022-523ba6db327c" (UID: "4c3952b1-8e23-4e55-a022-523ba6db327c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.292578 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c3952b1-8e23-4e55-a022-523ba6db327c" (UID: "4c3952b1-8e23-4e55-a022-523ba6db327c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.293359 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-config" (OuterVolumeSpecName: "config") pod "4c3952b1-8e23-4e55-a022-523ba6db327c" (UID: "4c3952b1-8e23-4e55-a022-523ba6db327c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.299433 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f64d9478-gvnsf"] Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.311195 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c3952b1-8e23-4e55-a022-523ba6db327c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c3952b1-8e23-4e55-a022-523ba6db327c" (UID: "4c3952b1-8e23-4e55-a022-523ba6db327c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.320949 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c3952b1-8e23-4e55-a022-523ba6db327c-kube-api-access-m26hc" (OuterVolumeSpecName: "kube-api-access-m26hc") pod "4c3952b1-8e23-4e55-a022-523ba6db327c" (UID: "4c3952b1-8e23-4e55-a022-523ba6db327c"). InnerVolumeSpecName "kube-api-access-m26hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.322860 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.323796 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-666dd99597-k6rxk" event={"ID":"4c3952b1-8e23-4e55-a022-523ba6db327c","Type":"ContainerDied","Data":"04bdf1c22d7d4b07411e7621982f53406dffb9197e51565b96cf29ab33054cf5"} Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.323862 4684 scope.go:117] "RemoveContainer" containerID="083eff80b0b15e25896b156af3a1d44000a148b9115182bb0c7989b28777bc96" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.325871 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.336060 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.389870 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-666dd99597-k6rxk"] Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.394338 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-666dd99597-k6rxk"] Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.394476 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-config\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.394513 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kz5ph\" (UniqueName: \"kubernetes.io/projected/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-kube-api-access-kz5ph\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.394609 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-client-ca\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.394681 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-serving-cert\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.394995 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-proxy-ca-bundles\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.395068 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3952b1-8e23-4e55-a022-523ba6db327c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.395220 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26hc\" (UniqueName: \"kubernetes.io/projected/4c3952b1-8e23-4e55-a022-523ba6db327c-kube-api-access-m26hc\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.395274 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.395286 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.395295 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c3952b1-8e23-4e55-a022-523ba6db327c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.496747 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-client-ca\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.496823 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-serving-cert\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.496852 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-proxy-ca-bundles\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.496886 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-config\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.496906 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5ph\" (UniqueName: \"kubernetes.io/projected/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-kube-api-access-kz5ph\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.498079 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-client-ca\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.502406 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-serving-cert\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.503275 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-proxy-ca-bundles\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.504246 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-config\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.513268 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5ph\" (UniqueName: \"kubernetes.io/projected/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-kube-api-access-kz5ph\") pod \"controller-manager-55f64d9478-gvnsf\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") " 
pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:48 crc kubenswrapper[4684]: I0123 09:11:48.658534 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.136219 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.206252 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kube-api-access\") pod \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.206659 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kubelet-dir\") pod \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\" (UID: \"11bcae14-ba2a-42b3-85b1-edbfec10d93a\") " Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.206788 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11bcae14-ba2a-42b3-85b1-edbfec10d93a" (UID: "11bcae14-ba2a-42b3-85b1-edbfec10d93a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.207603 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.218890 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11bcae14-ba2a-42b3-85b1-edbfec10d93a" (UID: "11bcae14-ba2a-42b3-85b1-edbfec10d93a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.309675 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11bcae14-ba2a-42b3-85b1-edbfec10d93a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.358646 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"11bcae14-ba2a-42b3-85b1-edbfec10d93a","Type":"ContainerDied","Data":"6a62396e618e6acadfc8a8072bc048776ccbf02e265e58f225633aef55b8460b"} Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.359002 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a62396e618e6acadfc8a8072bc048776ccbf02e265e58f225633aef55b8460b" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.359100 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.459483 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f64d9478-gvnsf"] Jan 23 09:11:49 crc kubenswrapper[4684]: W0123 09:11:49.466505 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb237825_b7c8_46ae_ae20_a1ea7309ee7e.slice/crio-fba3777596df8220d8e9f522a26200651cbad463698c1f64e2d1038fd0a1fac2 WatchSource:0}: Error finding container fba3777596df8220d8e9f522a26200651cbad463698c1f64e2d1038fd0a1fac2: Status 404 returned error can't find the container with id fba3777596df8220d8e9f522a26200651cbad463698c1f64e2d1038fd0a1fac2 Jan 23 09:11:49 crc kubenswrapper[4684]: I0123 09:11:49.590873 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3952b1-8e23-4e55-a022-523ba6db327c" path="/var/lib/kubelet/pods/4c3952b1-8e23-4e55-a022-523ba6db327c/volumes" Jan 23 09:11:50 crc kubenswrapper[4684]: I0123 09:11:50.402421 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" event={"ID":"fb237825-b7c8-46ae-ae20-a1ea7309ee7e","Type":"ContainerStarted","Data":"fba3777596df8220d8e9f522a26200651cbad463698c1f64e2d1038fd0a1fac2"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:54.447251 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerStarted","Data":"06d322703213706612011807604b50100de632a8938a972d78bd8b80d55fff50"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:54.456590 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerStarted","Data":"b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:54.459472 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerStarted","Data":"d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:54.462447 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" event={"ID":"fb237825-b7c8-46ae-ae20-a1ea7309ee7e","Type":"ContainerStarted","Data":"d6b80ad8fa3e19acf0c0b44cafb3483e63c310f9bdfb6ce3e2ca51d36f3852fb"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:54.464381 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerStarted","Data":"cfa2ad3764d44551aa6bc6c6a7de1e285407c0fea2f82dac38fd64dee528a1ec"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:54.467944 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerStarted","Data":"958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.428039 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.428352 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.428091 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-mc6nm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.428473 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mc6nm" podUID="8fa74b73-0b76-426c-a769-39477ab913f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.506887 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f64d9478-gvnsf"] Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.696585 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"] Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:55.696836 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" podUID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" containerName="route-controller-manager" containerID="cri-o://000c480365208dc2c60e6a41d525590947cd55e78d338c2d73957c31fb245675" gracePeriod=30 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:56.495322 4684 generic.go:334] "Generic (PLEG): container finished" podID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerID="958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:56.495637 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerDied","Data":"958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:57.503125 4684 generic.go:334] "Generic (PLEG): container finished" podID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerID="06d322703213706612011807604b50100de632a8938a972d78bd8b80d55fff50" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:57.503179 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerDied","Data":"06d322703213706612011807604b50100de632a8938a972d78bd8b80d55fff50"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:57.507752 4684 generic.go:334] "Generic (PLEG): container finished" podID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerID="cfa2ad3764d44551aa6bc6c6a7de1e285407c0fea2f82dac38fd64dee528a1ec" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:57.507797 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerDied","Data":"cfa2ad3764d44551aa6bc6c6a7de1e285407c0fea2f82dac38fd64dee528a1ec"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.512111 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager" containerID="cri-o://d6b80ad8fa3e19acf0c0b44cafb3483e63c310f9bdfb6ce3e2ca51d36f3852fb" gracePeriod=30 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.512500 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.545363 4684 patch_prober.go:28] interesting pod/controller-manager-55f64d9478-gvnsf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": EOF" start-of-body= Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.545427 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": EOF" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.558549 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" podStartSLOduration=103.558532566 podStartE2EDuration="1m43.558532566s" podCreationTimestamp="2026-01-23 09:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:58.555290937 +0000 UTC m=+291.178669478" watchObservedRunningTime="2026-01-23 09:11:58.558532566 +0000 UTC m=+291.181911107" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.659857 4684 patch_prober.go:28] interesting pod/controller-manager-55f64d9478-gvnsf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:11:58.660165 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:01.523162 4684 patch_prober.go:28] interesting pod/route-controller-manager-6d7f76996d-965j8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:01.523214 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" podUID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:01.631895 4684 generic.go:334] "Generic (PLEG): container finished" podID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" containerID="000c480365208dc2c60e6a41d525590947cd55e78d338c2d73957c31fb245675" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:01.631945 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" event={"ID":"a3346474-c3f2-4ef3-bcee-65f80e85ace4","Type":"ContainerDied","Data":"000c480365208dc2c60e6a41d525590947cd55e78d338c2d73957c31fb245675"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:01.634792 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerDied","Data":"b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:01.634871 4684 generic.go:334] "Generic (PLEG): container finished" podID="888f4644-d4e6-4334-8711-c552d0ef037a" containerID="b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:04.474061 4684 generic.go:334] "Generic (PLEG): container finished" podID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerID="d6b80ad8fa3e19acf0c0b44cafb3483e63c310f9bdfb6ce3e2ca51d36f3852fb" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:04.474194 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" event={"ID":"fb237825-b7c8-46ae-ae20-a1ea7309ee7e","Type":"ContainerDied","Data":"d6b80ad8fa3e19acf0c0b44cafb3483e63c310f9bdfb6ce3e2ca51d36f3852fb"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:05.390381 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" containerID="cri-o://f3d4749d1cdf3b2ee51e79c20ce920b5dfc161f4e6da5794c6c4502f5b162b07" gracePeriod=15 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:05.446299 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mc6nm" Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:05.492222 4684 generic.go:334] "Generic (PLEG): container finished" podID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerID="d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8" exitCode=0 Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:05.492272 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerDied","Data":"d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8"} Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:07.487878 4684 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:08.659468 4684 patch_prober.go:28] interesting pod/controller-manager-55f64d9478-gvnsf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" 
Jan 23 09:12:08 crc kubenswrapper[4684]: I0123 09:12:08.659828 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 23 09:12:09 crc kubenswrapper[4684]: I0123 09:12:09.518711 4684 generic.go:334] "Generic (PLEG): container finished" podID="c846db13-b93b-4e07-9e7b-e22106203982" containerID="f3d4749d1cdf3b2ee51e79c20ce920b5dfc161f4e6da5794c6c4502f5b162b07" exitCode=0
Jan 23 09:12:09 crc kubenswrapper[4684]: I0123 09:12:09.518764 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" event={"ID":"c846db13-b93b-4e07-9e7b-e22106203982","Type":"ContainerDied","Data":"f3d4749d1cdf3b2ee51e79c20ce920b5dfc161f4e6da5794c6c4502f5b162b07"}
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.445764 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.450332 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.480550 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"]
Jan 23 09:12:10 crc kubenswrapper[4684]: E0123 09:12:10.480837 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bcae14-ba2a-42b3-85b1-edbfec10d93a" containerName="pruner"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.480854 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="11bcae14-ba2a-42b3-85b1-edbfec10d93a" containerName="pruner"
Jan 23 09:12:10 crc kubenswrapper[4684]: E0123 09:12:10.480878 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.480884 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager"
Jan 23 09:12:10 crc kubenswrapper[4684]: E0123 09:12:10.480894 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" containerName="route-controller-manager"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.480902 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" containerName="route-controller-manager"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.481014 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="11bcae14-ba2a-42b3-85b1-edbfec10d93a" containerName="pruner"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.481025 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" containerName="route-controller-manager"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.481032 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" containerName="controller-manager"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.481479 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.496175 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"]
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524658 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-serving-cert\") pod \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524754 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-config\") pod \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524810 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-client-ca\") pod \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524891 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz5ph\" (UniqueName: \"kubernetes.io/projected/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-kube-api-access-kz5ph\") pod \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524925 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx5p9\" (UniqueName: \"kubernetes.io/projected/a3346474-c3f2-4ef3-bcee-65f80e85ace4-kube-api-access-zx5p9\") pod \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524957 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3346474-c3f2-4ef3-bcee-65f80e85ace4-serving-cert\") pod \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.524997 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-config\") pod \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\" (UID: \"a3346474-c3f2-4ef3-bcee-65f80e85ace4\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.525056 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-proxy-ca-bundles\") pod \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.525089 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-client-ca\") pod \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\" (UID: \"fb237825-b7c8-46ae-ae20-a1ea7309ee7e\") "
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.525522 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea467a71-d4b5-4361-b648-61dc754033ca-serving-cert\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.525570 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-config\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.525632 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2694h\" (UniqueName: \"kubernetes.io/projected/ea467a71-d4b5-4361-b648-61dc754033ca-kube-api-access-2694h\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.525673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-client-ca\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.528568 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb237825-b7c8-46ae-ae20-a1ea7309ee7e" (UID: "fb237825-b7c8-46ae-ae20-a1ea7309ee7e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.528801 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-config" (OuterVolumeSpecName: "config") pod "a3346474-c3f2-4ef3-bcee-65f80e85ace4" (UID: "a3346474-c3f2-4ef3-bcee-65f80e85ace4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.528988 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fb237825-b7c8-46ae-ae20-a1ea7309ee7e" (UID: "fb237825-b7c8-46ae-ae20-a1ea7309ee7e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
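[Editor's note] The interleaved TearDown and VerifyControllerAttachedVolume entries are the kubelet's volume manager reconciling desired against actual state while the old controller-manager and route-controller-manager replicas disappear and the 85d79997c7-pbqc5 replacement arrives. The same churn can be followed from outside the node by watching pod events; a minimal client-go sketch, with clientset construction elided:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Stream ADDED/MODIFIED/DELETED events for the pods being rolled above.
    func watchRollout(ctx context.Context, cs kubernetes.Interface) error {
        w, err := cs.CoreV1().Pods("openshift-route-controller-manager").
            Watch(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            if pod, ok := ev.Object.(*corev1.Pod); ok {
                fmt.Println(ev.Type, pod.Name, pod.Status.Phase)
            }
        }
        return nil
    }
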
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.529915 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-config" (OuterVolumeSpecName: "config") pod "fb237825-b7c8-46ae-ae20-a1ea7309ee7e" (UID: "fb237825-b7c8-46ae-ae20-a1ea7309ee7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.539963 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3346474-c3f2-4ef3-bcee-65f80e85ace4-kube-api-access-zx5p9" (OuterVolumeSpecName: "kube-api-access-zx5p9") pod "a3346474-c3f2-4ef3-bcee-65f80e85ace4" (UID: "a3346474-c3f2-4ef3-bcee-65f80e85ace4"). InnerVolumeSpecName "kube-api-access-zx5p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.540131 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb237825-b7c8-46ae-ae20-a1ea7309ee7e" (UID: "fb237825-b7c8-46ae-ae20-a1ea7309ee7e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.541110 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3346474-c3f2-4ef3-bcee-65f80e85ace4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3346474-c3f2-4ef3-bcee-65f80e85ace4" (UID: "a3346474-c3f2-4ef3-bcee-65f80e85ace4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.542762 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" event={"ID":"a3346474-c3f2-4ef3-bcee-65f80e85ace4","Type":"ContainerDied","Data":"94353a2406ab348f5d41a13b774781e37b5574d76077bd174e3111670d5f5633"} Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.542825 4684 scope.go:117] "RemoveContainer" containerID="000c480365208dc2c60e6a41d525590947cd55e78d338c2d73957c31fb245675" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.543174 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.543253 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-kube-api-access-kz5ph" (OuterVolumeSpecName: "kube-api-access-kz5ph") pod "fb237825-b7c8-46ae-ae20-a1ea7309ee7e" (UID: "fb237825-b7c8-46ae-ae20-a1ea7309ee7e"). InnerVolumeSpecName "kube-api-access-kz5ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.552187 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" event={"ID":"fb237825-b7c8-46ae-ae20-a1ea7309ee7e","Type":"ContainerDied","Data":"fba3777596df8220d8e9f522a26200651cbad463698c1f64e2d1038fd0a1fac2"} Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.552401 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f64d9478-gvnsf" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.600912 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.613156 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"] Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.616146 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7f76996d-965j8"] Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626267 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-provider-selection\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626310 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-audit-policies\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626332 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-error\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626362 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-session\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626384 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-idp-0-file-data\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626402 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-service-ca\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626433 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-cliconfig\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626455 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/c846db13-b93b-4e07-9e7b-e22106203982-audit-dir\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626480 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-ocp-branding-template\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626501 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-trusted-ca-bundle\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626521 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-login\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626540 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-serving-cert\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626569 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nnm7\" (UniqueName: \"kubernetes.io/projected/c846db13-b93b-4e07-9e7b-e22106203982-kube-api-access-2nnm7\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626641 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-router-certs\") pod \"c846db13-b93b-4e07-9e7b-e22106203982\" (UID: \"c846db13-b93b-4e07-9e7b-e22106203982\") " Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626757 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea467a71-d4b5-4361-b648-61dc754033ca-serving-cert\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626789 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-config\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2694h\" (UniqueName: 
\"kubernetes.io/projected/ea467a71-d4b5-4361-b648-61dc754033ca-kube-api-access-2694h\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626876 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-client-ca\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626939 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz5ph\" (UniqueName: \"kubernetes.io/projected/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-kube-api-access-kz5ph\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626950 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx5p9\" (UniqueName: \"kubernetes.io/projected/a3346474-c3f2-4ef3-bcee-65f80e85ace4-kube-api-access-zx5p9\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626961 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3346474-c3f2-4ef3-bcee-65f80e85ace4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626970 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626979 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626989 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.626998 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.627008 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb237825-b7c8-46ae-ae20-a1ea7309ee7e-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.627017 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3346474-c3f2-4ef3-bcee-65f80e85ace4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.627936 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-client-ca\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 
09:12:10.629717 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.630305 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.630890 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.633827 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c846db13-b93b-4e07-9e7b-e22106203982-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.634433 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.636594 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-config\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.640429 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.640826 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.645578 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f64d9478-gvnsf"] Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.651974 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea467a71-d4b5-4361-b648-61dc754033ca-serving-cert\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.653380 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.653520 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55f64d9478-gvnsf"] Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.669654 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.673470 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.674097 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.674889 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c846db13-b93b-4e07-9e7b-e22106203982-kube-api-access-2nnm7" (OuterVolumeSpecName: "kube-api-access-2nnm7") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "kube-api-access-2nnm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.675130 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.678105 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c846db13-b93b-4e07-9e7b-e22106203982" (UID: "c846db13-b93b-4e07-9e7b-e22106203982"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.678348 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2694h\" (UniqueName: \"kubernetes.io/projected/ea467a71-d4b5-4361-b648-61dc754033ca-kube-api-access-2694h\") pod \"route-controller-manager-85d79997c7-pbqc5\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727766 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727808 4684 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c846db13-b93b-4e07-9e7b-e22106203982-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727824 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727837 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727848 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727859 4684 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727874 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nnm7\" (UniqueName: \"kubernetes.io/projected/c846db13-b93b-4e07-9e7b-e22106203982-kube-api-access-2nnm7\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727885 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727896 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727912 4684 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727923 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727934 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727945 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.727957 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c846db13-b93b-4e07-9e7b-e22106203982-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:10 crc kubenswrapper[4684]: I0123 09:12:10.812082 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:11 crc kubenswrapper[4684]: I0123 09:12:11.559330 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" event={"ID":"c846db13-b93b-4e07-9e7b-e22106203982","Type":"ContainerDied","Data":"e6fb2423efcaf120919a2ec511db67b899b41f2db615f1160512f485e94158c5"} Jan 23 09:12:11 crc kubenswrapper[4684]: I0123 09:12:11.559391 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hv7d8" Jan 23 09:12:11 crc kubenswrapper[4684]: I0123 09:12:11.600740 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3346474-c3f2-4ef3-bcee-65f80e85ace4" path="/var/lib/kubelet/pods/a3346474-c3f2-4ef3-bcee-65f80e85ace4/volumes" Jan 23 09:12:11 crc kubenswrapper[4684]: I0123 09:12:11.601387 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb237825-b7c8-46ae-ae20-a1ea7309ee7e" path="/var/lib/kubelet/pods/fb237825-b7c8-46ae-ae20-a1ea7309ee7e/volumes" Jan 23 09:12:11 crc kubenswrapper[4684]: I0123 09:12:11.601959 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hv7d8"] Jan 23 09:12:11 crc kubenswrapper[4684]: I0123 09:12:11.601995 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hv7d8"] Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.238943 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b7cc87cc9-sxktc"] Jan 23 09:12:13 crc kubenswrapper[4684]: E0123 09:12:13.239530 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.239542 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.239661 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c846db13-b93b-4e07-9e7b-e22106203982" containerName="oauth-openshift" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.240140 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.250734 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b7cc87cc9-sxktc"] Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.251233 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.251383 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.251533 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.251637 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.252988 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.253667 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.253908 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.258681 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-config\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.258753 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdt9\" (UniqueName: \"kubernetes.io/projected/aa75b6a1-3672-4315-8606-19758a6604b7-kube-api-access-fkdt9\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.258793 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa75b6a1-3672-4315-8606-19758a6604b7-serving-cert\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.258816 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-client-ca\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.258938 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-proxy-ca-bundles\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.365927 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa75b6a1-3672-4315-8606-19758a6604b7-serving-cert\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.365994 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-client-ca\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.366034 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-proxy-ca-bundles\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.366086 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-config\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.366117 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdt9\" (UniqueName: \"kubernetes.io/projected/aa75b6a1-3672-4315-8606-19758a6604b7-kube-api-access-fkdt9\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.367482 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-client-ca\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.369572 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-config\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.369578 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-proxy-ca-bundles\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 
09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.376337 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa75b6a1-3672-4315-8606-19758a6604b7-serving-cert\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.382890 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdt9\" (UniqueName: \"kubernetes.io/projected/aa75b6a1-3672-4315-8606-19758a6604b7-kube-api-access-fkdt9\") pod \"controller-manager-b7cc87cc9-sxktc\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.557524 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:13 crc kubenswrapper[4684]: I0123 09:12:13.588655 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c846db13-b93b-4e07-9e7b-e22106203982" path="/var/lib/kubelet/pods/c846db13-b93b-4e07-9e7b-e22106203982/volumes" Jan 23 09:12:15 crc kubenswrapper[4684]: I0123 09:12:15.444454 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b7cc87cc9-sxktc"] Jan 23 09:12:15 crc kubenswrapper[4684]: I0123 09:12:15.451454 4684 scope.go:117] "RemoveContainer" containerID="d6b80ad8fa3e19acf0c0b44cafb3483e63c310f9bdfb6ce3e2ca51d36f3852fb" Jan 23 09:12:15 crc kubenswrapper[4684]: I0123 09:12:15.461769 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"] Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.243425 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69cb985589-w7hkw"] Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.245193 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.258193 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.258229 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.259807 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260140 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260168 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260188 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260322 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260328 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260461 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260507 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.260628 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.261088 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.264396 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69cb985589-w7hkw"] Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.268910 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.279572 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.284397 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347098 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mq68\" (UniqueName: \"kubernetes.io/projected/d25f9561-bcbb-4309-b3b6-de838bbf47bd-kube-api-access-6mq68\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " 
pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347345 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347411 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d25f9561-bcbb-4309-b3b6-de838bbf47bd-audit-dir\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347429 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347454 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-session\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347591 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-login\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347838 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347871 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-error\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347908 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-audit-policies\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347926 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.347961 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.348080 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.348148 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449570 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mq68\" (UniqueName: \"kubernetes.io/projected/d25f9561-bcbb-4309-b3b6-de838bbf47bd-kube-api-access-6mq68\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449640 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449675 4684 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d25f9561-bcbb-4309-b3b6-de838bbf47bd-audit-dir\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449694 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449764 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-session\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449793 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d25f9561-bcbb-4309-b3b6-de838bbf47bd-audit-dir\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449803 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-login\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449915 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449950 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449974 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-error\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.449997 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.450128 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-audit-policies\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.450152 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.450184 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.450219 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.451334 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.451419 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-audit-policies\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.452428 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.454626 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.455655 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.456155 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.461367 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.461571 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-error\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.461731 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-session\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.462040 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-user-template-login\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.462143 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.465624 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d25f9561-bcbb-4309-b3b6-de838bbf47bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.467632 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mq68\" (UniqueName: \"kubernetes.io/projected/d25f9561-bcbb-4309-b3b6-de838bbf47bd-kube-api-access-6mq68\") pod \"oauth-openshift-69cb985589-w7hkw\" (UID: \"d25f9561-bcbb-4309-b3b6-de838bbf47bd\") " pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:19 crc kubenswrapper[4684]: I0123 09:12:19.581854 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.664313 4684 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.665778 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666082 4684 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666551 4684 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666739 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666759 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666772 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666781 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666794 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666801 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666816 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666824 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666833 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666841 4684 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666851 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666858 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 09:12:23 crc kubenswrapper[4684]: E0123 09:12:23.666869 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666876 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.666992 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.667009 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.667018 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.667024 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.667033 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.667269 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.668476 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a" gracePeriod=15 Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.668639 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621" gracePeriod=15 Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.668658 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da" gracePeriod=15 Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.668723 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b" gracePeriod=15 Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.670813 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c" gracePeriod=15 Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.701612 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.722355 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.722415 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.722797 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.722899 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.723041 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.723120 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.723262 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.723302 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.824902 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.824968 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825005 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825028 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825057 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825079 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825088 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825109 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825157 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825118 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825205 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825202 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825194 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825107 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825277 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.825282 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:23 crc kubenswrapper[4684]: I0123 09:12:23.998294 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:12:24 crc kubenswrapper[4684]: I0123 09:12:24.045015 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]log ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]api-openshift-apiserver-available ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]api-openshift-oauth-apiserver-available ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]informer-sync ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/priority-and-fairness-filter ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-apiextensions-informers ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-apiextensions-controllers ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/crd-informer-synced ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-system-namespaces-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/rbac/bootstrap-roles ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/bootstrap-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/start-kube-aggregator-informers ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/apiservice-registration-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 23 09:12:24 crc kubenswrapper[4684]: 
[+]poststarthook/apiservice-discovery-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]autoregister-completion ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/apiservice-openapi-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 23 09:12:24 crc kubenswrapper[4684]: [-]shutdown failed: reason withheld Jan 23 09:12:24 crc kubenswrapper[4684]: readyz check failed Jan 23 09:12:24 crc kubenswrapper[4684]: I0123 09:12:24.045086 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 09:12:26 crc kubenswrapper[4684]: I0123 09:12:26.648454 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 09:12:26 crc kubenswrapper[4684]: I0123 09:12:26.650107 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 09:12:26 crc kubenswrapper[4684]: I0123 09:12:26.650845 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621" exitCode=2 Jan 23 09:12:27 crc kubenswrapper[4684]: E0123 09:12:27.363218 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.16:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-hcd6g.188d513c2381b5e8 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-hcd6g,UID:a32a23a8-fd38-4a01-bc87-e589889a39e6,APIVersion:v1,ResourceVersion:28629,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 28.848s (28.848s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 09:12:27.362063848 +0000 UTC m=+319.985442409,LastTimestamp:2026-01-23 09:12:27.362063848 +0000 UTC m=+319.985442409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 09:12:27 crc kubenswrapper[4684]: I0123 09:12:27.584600 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:31 crc kubenswrapper[4684]: E0123 09:12:31.331236 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.16:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-hcd6g.188d513c2381b5e8 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-hcd6g,UID:a32a23a8-fd38-4a01-bc87-e589889a39e6,APIVersion:v1,ResourceVersion:28629,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 28.848s (28.848s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 09:12:27.362063848 +0000 UTC m=+319.985442409,LastTimestamp:2026-01-23 09:12:27.362063848 +0000 UTC m=+319.985442409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.124054 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.125006 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.125589 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.126114 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.126475 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:34 crc kubenswrapper[4684]: I0123 09:12:34.126513 4684 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.126831 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="200ms" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.327392 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="400ms" Jan 23 09:12:34 crc kubenswrapper[4684]: E0123 09:12:34.728755 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="800ms" Jan 23 09:12:35 crc kubenswrapper[4684]: E0123 09:12:35.529614 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="1.6s" Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.701647 4684 generic.go:334] "Generic (PLEG): container finished" podID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" containerID="104390b7d36d2bb63212448fb64f1a139447c9ca332f78344ccd7b61d1a97a76" exitCode=0 Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.702011 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"edcaacae-d1c5-4a66-9220-54ee4b5991ac","Type":"ContainerDied","Data":"104390b7d36d2bb63212448fb64f1a139447c9ca332f78344ccd7b61d1a97a76"} Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.703043 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.703963 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.704060 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.705416 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.706151 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c" exitCode=0 Jan 
23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.706173 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b" exitCode=0 Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.706184 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da" exitCode=0 Jan 23 09:12:35 crc kubenswrapper[4684]: I0123 09:12:35.706193 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a" exitCode=0 Jan 23 09:12:36 crc kubenswrapper[4684]: I0123 09:12:36.636498 4684 scope.go:117] "RemoveContainer" containerID="f3d4749d1cdf3b2ee51e79c20ce920b5dfc161f4e6da5794c6c4502f5b162b07" Jan 23 09:12:36 crc kubenswrapper[4684]: E0123 09:12:36.645213 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 09:12:36 crc kubenswrapper[4684]: E0123 09:12:36.645634 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v857v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pc4kj_openshift-marketplace(2f9880b0-14ae-4649-b7ba-6d0dd1ab5151): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:12:36 crc kubenswrapper[4684]: E0123 09:12:36.645900 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Jan 23 09:12:36 crc kubenswrapper[4684]: E0123 09:12:36.646103 4684 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfgdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vnv8t_openshift-marketplace(5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:12:36 crc kubenswrapper[4684]: E0123 09:12:36.646969 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pc4kj" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" Jan 23 09:12:36 crc kubenswrapper[4684]: E0123 09:12:36.648062 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vnv8t" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.078075 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.079646 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.080416 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.080878 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.081359 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.082533 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.106952 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107029 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107037 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107099 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107118 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107216 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107387 4684 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107401 4684 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.107410 4684 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:37 crc kubenswrapper[4684]: E0123 09:12:37.130825 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="3.2s" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.282139 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.282648 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.282921 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.283199 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.309575 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kubelet-dir\") pod \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.309752 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"edcaacae-d1c5-4a66-9220-54ee4b5991ac" (UID: "edcaacae-d1c5-4a66-9220-54ee4b5991ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.309784 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kube-api-access\") pod \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.310137 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-var-lock\") pod \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\" (UID: \"edcaacae-d1c5-4a66-9220-54ee4b5991ac\") " Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.310426 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.310462 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "edcaacae-d1c5-4a66-9220-54ee4b5991ac" (UID: "edcaacae-d1c5-4a66-9220-54ee4b5991ac"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.316600 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "edcaacae-d1c5-4a66-9220-54ee4b5991ac" (UID: "edcaacae-d1c5-4a66-9220-54ee4b5991ac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.320900 4684 scope.go:117] "RemoveContainer" containerID="42263a97079566dbd93f1ca20399fd1f6cc2400f0d042ed062c1c1e15eaf0109" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.412240 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edcaacae-d1c5-4a66-9220-54ee4b5991ac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.412276 4684 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/edcaacae-d1c5-4a66-9220-54ee4b5991ac-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.593102 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.593538 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.593728 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.651995 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.795554 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.795867 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"edcaacae-d1c5-4a66-9220-54ee4b5991ac","Type":"ContainerDied","Data":"aa1ae4ae08fd4acf2f597f9a976c5dfa9d2ec38907d8c6d95942bc0efbcbec66"} Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.795922 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1ae4ae08fd4acf2f597f9a976c5dfa9d2ec38907d8c6d95942bc0efbcbec66" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.797989 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f3dfd4924e01b0ed9a407af0c786f4f5ab341424b25243b096eada7ff105468d"} Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.807097 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.813149 4684 scope.go:117] "RemoveContainer" containerID="74958cd4355a9eb04e07c960b1063b56f11cb3ae27a3ab9eac50f54ebac78c8c" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.813316 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.815025 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.815216 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.815500 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.829810 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.830616 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc 
kubenswrapper[4684]: I0123 09:12:37.830854 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.831286 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.831478 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.831846 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.875471 4684 scope.go:117] "RemoveContainer" containerID="9db80d9b156d2828ad5bcd38bc2d0783dac35f10f547f098815ee596931cde3b" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.906714 4684 scope.go:117] "RemoveContainer" containerID="5b80737ea9f882f63be2cf6a2f74002963d16e18aea3c96f738b2cd188f3c1da" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.928007 4684 scope.go:117] "RemoveContainer" containerID="68e3ed6cfd5c1ab6379385c7acee58117333f815f21be7d7c61038f7827f6621" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.960742 4684 scope.go:117] "RemoveContainer" containerID="39b1d62654cdce3e6a1e54cc35f36d530dec39b7ec54d7aba2ea8a64844ff90a" Jan 23 09:12:37 crc kubenswrapper[4684]: I0123 09:12:37.976946 4684 scope.go:117] "RemoveContainer" containerID="efa2eef93c6f5766565795e6674f79bc2e7cb62ac76cd9a1e407561378d62732" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.149641 4684 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 09:12:38 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596" Netns:"/var/run/netns/8457f709-23f1-4186-adf3-36c14f34d693" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.149744 4684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 23 09:12:38 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596" Netns:"/var/run/netns/8457f709-23f1-4186-adf3-36c14f34d693" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.149769 4684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 23 09:12:38 crc 
kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596" Netns:"/var/run/netns/8457f709-23f1-4186-adf3-36c14f34d693" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.149838 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596\\\" Netns:\\\"/var/run/netns/8457f709-23f1-4186-adf3-36c14f34d693\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=655ac40d58ccf9efb9c5a3b2dfcfc2f2bfab897d69ccd48cdda5c0d974bc3596;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s\\\": dial tcp 38.129.56.16:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.159756 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3295ee1e384bd13d7f93a565d0e83b4cb096da43c673235ced6ac2c39d64dfa1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:91b55f2f378a9a1fbbda6c2423a0a3bc0c66e0dd45dee584db70782d1b7ba863\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1671873254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8ec63a5af90efa25f6221a312db0
15f279dc78f8c7319e0fa1782471e1e18acf\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:99b77813d1f8030ff0e28a82bfc5b89346cbad2ca5cb2f89274e21e035b5b066\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1176015092},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db77
08c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\"
:485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.160188 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.160424 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.160605 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.160792 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.160810 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.161873 4684 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 09:12:38 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-b7cc87cc9-sxktc_openshift-controller-manager_aa75b6a1-3672-4315-8606-19758a6604b7_0(807f72c1035994c669c47757e9ff9511e689122a33212f9cfac44d3aa87016e2): error adding pod openshift-controller-manager_controller-manager-b7cc87cc9-sxktc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"807f72c1035994c669c47757e9ff9511e689122a33212f9cfac44d3aa87016e2" Netns:"/var/run/netns/5d1c6743-0efc-41e3-874e-771109843615" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-b7cc87cc9-sxktc;K8S_POD_INFRA_CONTAINER_ID=807f72c1035994c669c47757e9ff9511e689122a33212f9cfac44d3aa87016e2;K8S_POD_UID=aa75b6a1-3672-4315-8606-19758a6604b7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-b7cc87cc9-sxktc] networking: Multus: [openshift-controller-manager/controller-manager-b7cc87cc9-sxktc/aa75b6a1-3672-4315-8606-19758a6604b7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-b7cc87cc9-sxktc in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-b7cc87cc9-sxktc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.161928 4684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 23 09:12:38 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-b7cc87cc9-sxktc_openshift-controller-manager_aa75b6a1-3672-4315-8606-19758a6604b7_0(807f72c1035994c669c47757e9ff9511e689122a33212f9cfac44d3aa87016e2): error adding pod openshift-controller-manager_controller-manager-b7cc87cc9-sxktc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"807f72c1035994c669c47757e9ff9511e689122a33212f9cfac44d3aa87016e2" Netns:"/var/run/netns/5d1c6743-0efc-41e3-874e-771109843615" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-b7cc87cc9-sxktc;K8S_POD_INFRA_CONTAINER_ID=807f72c1035994c669c47757e9ff9511e689122a33212f9cfac44d3aa87016e2;K8S_POD_UID=aa75b6a1-3672-4315-8606-19758a6604b7" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-b7cc87cc9-sxktc] networking: Multus: [openshift-controller-manager/controller-manager-b7cc87cc9-sxktc/aa75b6a1-3672-4315-8606-19758a6604b7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-b7cc87cc9-sxktc in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-b7cc87cc9-sxktc in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.254549 4684 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 09:12:38 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-85d79997c7-pbqc5_openshift-route-controller-manager_ea467a71-d4b5-4361-b648-61dc754033ca_0(7878f60e33682758db5eb417b22b26e659082423c03fd00316aa2c37e848c929): error adding pod openshift-route-controller-manager_route-controller-manager-85d79997c7-pbqc5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7878f60e33682758db5eb417b22b26e659082423c03fd00316aa2c37e848c929" Netns:"/var/run/netns/54224f4f-c9f9-429b-9aff-2a296c964963" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-85d79997c7-pbqc5;K8S_POD_INFRA_CONTAINER_ID=7878f60e33682758db5eb417b22b26e659082423c03fd00316aa2c37e848c929;K8S_POD_UID=ea467a71-d4b5-4361-b648-61dc754033ca" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5] networking: Multus: [openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5/ea467a71-d4b5-4361-b648-61dc754033ca]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-85d79997c7-pbqc5 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-85d79997c7-pbqc5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > Jan 23 09:12:38 crc kubenswrapper[4684]: E0123 09:12:38.254617 4684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 23 09:12:38 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-85d79997c7-pbqc5_openshift-route-controller-manager_ea467a71-d4b5-4361-b648-61dc754033ca_0(7878f60e33682758db5eb417b22b26e659082423c03fd00316aa2c37e848c929): error adding pod openshift-route-controller-manager_route-controller-manager-85d79997c7-pbqc5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"7878f60e33682758db5eb417b22b26e659082423c03fd00316aa2c37e848c929" Netns:"/var/run/netns/54224f4f-c9f9-429b-9aff-2a296c964963" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-85d79997c7-pbqc5;K8S_POD_INFRA_CONTAINER_ID=7878f60e33682758db5eb417b22b26e659082423c03fd00316aa2c37e848c929;K8S_POD_UID=ea467a71-d4b5-4361-b648-61dc754033ca" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5] networking: Multus: [openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5/ea467a71-d4b5-4361-b648-61dc754033ca]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-85d79997c7-pbqc5 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-85d79997c7-pbqc5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:38 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:38 crc kubenswrapper[4684]: > pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.822710 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerStarted","Data":"7d0fd50bcb08fe29c47575a5ad2121e36eba72bc60c62f6728c33fdad33487b5"} Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.825757 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerStarted","Data":"314a784dfdec297108ed663b3e24d6ac32cd9ce71df0d9f686a8825dfe6a0738"} Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.825803 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.825819 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.825848 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.826454 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.826639 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.826732 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.826988 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.828027 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.828302 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.828502 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.828943 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.829210 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: 
connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.829457 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.835002 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.835730 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.835984 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.836227 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.840849 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.841102 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.879647 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.880265 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.880637 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.881051 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.881260 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.881433 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952260 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2694h\" (UniqueName: \"kubernetes.io/projected/ea467a71-d4b5-4361-b648-61dc754033ca-kube-api-access-2694h\") pod \"ea467a71-d4b5-4361-b648-61dc754033ca\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952358 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa75b6a1-3672-4315-8606-19758a6604b7-serving-cert\") pod \"aa75b6a1-3672-4315-8606-19758a6604b7\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952421 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-client-ca\") pod \"aa75b6a1-3672-4315-8606-19758a6604b7\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952490 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-config\") pod \"ea467a71-d4b5-4361-b648-61dc754033ca\" (UID: 
\"ea467a71-d4b5-4361-b648-61dc754033ca\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952521 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-client-ca\") pod \"ea467a71-d4b5-4361-b648-61dc754033ca\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952573 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-config\") pod \"aa75b6a1-3672-4315-8606-19758a6604b7\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952596 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdt9\" (UniqueName: \"kubernetes.io/projected/aa75b6a1-3672-4315-8606-19758a6604b7-kube-api-access-fkdt9\") pod \"aa75b6a1-3672-4315-8606-19758a6604b7\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952619 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-proxy-ca-bundles\") pod \"aa75b6a1-3672-4315-8606-19758a6604b7\" (UID: \"aa75b6a1-3672-4315-8606-19758a6604b7\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.952653 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea467a71-d4b5-4361-b648-61dc754033ca-serving-cert\") pod \"ea467a71-d4b5-4361-b648-61dc754033ca\" (UID: \"ea467a71-d4b5-4361-b648-61dc754033ca\") " Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.953180 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "aa75b6a1-3672-4315-8606-19758a6604b7" (UID: "aa75b6a1-3672-4315-8606-19758a6604b7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.954192 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-config" (OuterVolumeSpecName: "config") pod "ea467a71-d4b5-4361-b648-61dc754033ca" (UID: "ea467a71-d4b5-4361-b648-61dc754033ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.954335 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aa75b6a1-3672-4315-8606-19758a6604b7" (UID: "aa75b6a1-3672-4315-8606-19758a6604b7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.954892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-config" (OuterVolumeSpecName: "config") pod "aa75b6a1-3672-4315-8606-19758a6604b7" (UID: "aa75b6a1-3672-4315-8606-19758a6604b7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.955331 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.955352 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.955363 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.955387 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa75b6a1-3672-4315-8606-19758a6604b7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.955415 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "ea467a71-d4b5-4361-b648-61dc754033ca" (UID: "ea467a71-d4b5-4361-b648-61dc754033ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.987504 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea467a71-d4b5-4361-b648-61dc754033ca-kube-api-access-2694h" (OuterVolumeSpecName: "kube-api-access-2694h") pod "ea467a71-d4b5-4361-b648-61dc754033ca" (UID: "ea467a71-d4b5-4361-b648-61dc754033ca"). InnerVolumeSpecName "kube-api-access-2694h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.987474 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa75b6a1-3672-4315-8606-19758a6604b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aa75b6a1-3672-4315-8606-19758a6604b7" (UID: "aa75b6a1-3672-4315-8606-19758a6604b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.987948 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa75b6a1-3672-4315-8606-19758a6604b7-kube-api-access-fkdt9" (OuterVolumeSpecName: "kube-api-access-fkdt9") pod "aa75b6a1-3672-4315-8606-19758a6604b7" (UID: "aa75b6a1-3672-4315-8606-19758a6604b7"). InnerVolumeSpecName "kube-api-access-fkdt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:12:38 crc kubenswrapper[4684]: I0123 09:12:38.988078 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea467a71-d4b5-4361-b648-61dc754033ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ea467a71-d4b5-4361-b648-61dc754033ca" (UID: "ea467a71-d4b5-4361-b648-61dc754033ca"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.056614 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkdt9\" (UniqueName: \"kubernetes.io/projected/aa75b6a1-3672-4315-8606-19758a6604b7-kube-api-access-fkdt9\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.056648 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea467a71-d4b5-4361-b648-61dc754033ca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.056664 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2694h\" (UniqueName: \"kubernetes.io/projected/ea467a71-d4b5-4361-b648-61dc754033ca-kube-api-access-2694h\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.056675 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa75b6a1-3672-4315-8606-19758a6604b7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.056687 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea467a71-d4b5-4361-b648-61dc754033ca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:39 crc kubenswrapper[4684]: E0123 09:12:39.264195 4684 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 09:12:39 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791" Netns:"/var/run/netns/86c3e1d0-83bf-4d51-bfe4-c44477c9322b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:39 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 
23 09:12:39 crc kubenswrapper[4684]: > Jan 23 09:12:39 crc kubenswrapper[4684]: E0123 09:12:39.264440 4684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 23 09:12:39 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791" Netns:"/var/run/netns/86c3e1d0-83bf-4d51-bfe4-c44477c9322b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:39 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:39 crc kubenswrapper[4684]: > pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:39 crc kubenswrapper[4684]: E0123 09:12:39.264459 4684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 23 09:12:39 crc kubenswrapper[4684]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791" Netns:"/var/run/netns/86c3e1d0-83bf-4d51-bfe4-c44477c9322b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: 
[openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s": dial tcp 38.129.56.16:6443: connect: connection refused Jan 23 09:12:39 crc kubenswrapper[4684]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 09:12:39 crc kubenswrapper[4684]: > pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:39 crc kubenswrapper[4684]: E0123 09:12:39.264509 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69cb985589-w7hkw_openshift-authentication_d25f9561-bcbb-4309-b3b6-de838bbf47bd_0(ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791): error adding pod openshift-authentication_oauth-openshift-69cb985589-w7hkw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791\\\" Netns:\\\"/var/run/netns/86c3e1d0-83bf-4d51-bfe4-c44477c9322b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69cb985589-w7hkw;K8S_POD_INFRA_CONTAINER_ID=ef43daf68b5866cd122dcc2ad3581ad11abf057bd00c0f09c2b283976b666791;K8S_POD_UID=d25f9561-bcbb-4309-b3b6-de838bbf47bd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69cb985589-w7hkw] networking: Multus: [openshift-authentication/oauth-openshift-69cb985589-w7hkw/d25f9561-bcbb-4309-b3b6-de838bbf47bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-69cb985589-w7hkw in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-69cb985589-w7hkw?timeout=1m0s\\\": dial tcp 38.129.56.16:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.581358 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.582457 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.582852 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.583051 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.583267 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.583474 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.600257 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.600286 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:39 crc kubenswrapper[4684]: E0123 09:12:39.600732 4684 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.601173 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:39 crc kubenswrapper[4684]: W0123 09:12:39.618980 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-990b6ad19258dfb3361a53ec4bda9f5c153adc294f859b8948f7180ac3dc998e WatchSource:0}: Error finding container 990b6ad19258dfb3361a53ec4bda9f5c153adc294f859b8948f7180ac3dc998e: Status 404 returned error can't find the container with id 990b6ad19258dfb3361a53ec4bda9f5c153adc294f859b8948f7180ac3dc998e Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.832039 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"990b6ad19258dfb3361a53ec4bda9f5c153adc294f859b8948f7180ac3dc998e"} Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.834155 4684 generic.go:334] "Generic (PLEG): container finished" podID="597fda0b-2292-4816-a498-539a84a87f33" containerID="3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890" exitCode=0 Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.834242 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerDied","Data":"3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890"} Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.835661 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b"} Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.837605 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerStarted","Data":"ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7"} Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.839344 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerStarted","Data":"4a0b2a0ef5c98c480706279798937583e4c985a9e6507df4a2a8b280aea634ca"} Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.842607 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.842763 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerStarted","Data":"ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca"} Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.843493 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.843887 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.844250 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.844948 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.845272 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.845598 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.845885 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.846194 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.846453 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.846727 4684 status_manager.go:851] "Failed to get status 
for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.859031 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.859422 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.859667 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.859904 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.860150 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.860557 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.860885 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.861138 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.861468 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:39 crc kubenswrapper[4684]: I0123 09:12:39.861736 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:40 crc kubenswrapper[4684]: E0123 09:12:40.331601 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="6.4s" Jan 23 09:12:40 crc kubenswrapper[4684]: I0123 09:12:40.847664 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:40 crc kubenswrapper[4684]: I0123 09:12:40.848256 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:40 crc kubenswrapper[4684]: I0123 09:12:40.848526 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:40 crc kubenswrapper[4684]: I0123 09:12:40.848797 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:40 crc kubenswrapper[4684]: I0123 09:12:40.849043 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:40 crc kubenswrapper[4684]: I0123 09:12:40.849300 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:41 crc kubenswrapper[4684]: E0123 09:12:41.332933 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.16:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-hcd6g.188d513c2381b5e8 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-hcd6g,UID:a32a23a8-fd38-4a01-bc87-e589889a39e6,APIVersion:v1,ResourceVersion:28629,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 28.848s (28.848s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 09:12:27.362063848 +0000 UTC m=+319.985442409,LastTimestamp:2026-01-23 09:12:27.362063848 +0000 UTC m=+319.985442409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 09:12:41 crc kubenswrapper[4684]: I0123 09:12:41.528564 4684 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 09:12:41 crc kubenswrapper[4684]: I0123 09:12:41.528643 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.858932 4684 generic.go:334] "Generic (PLEG): container finished" podID="6386382b-e651-4888-857e-a3a7325f1f14" containerID="4a0b2a0ef5c98c480706279798937583e4c985a9e6507df4a2a8b280aea634ca" exitCode=0 Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.858968 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerDied","Data":"4a0b2a0ef5c98c480706279798937583e4c985a9e6507df4a2a8b280aea634ca"} Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.860187 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.860646 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.860984 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.861261 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.861488 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:42 crc kubenswrapper[4684]: I0123 09:12:42.861726 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:44 crc kubenswrapper[4684]: I0123 09:12:44.630088 4684 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 09:12:44 crc kubenswrapper[4684]: I0123 09:12:44.630448 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.433392 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x2mrs" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.433446 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x2mrs" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.836741 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x2mrs" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.837264 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.837511 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.837685 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.837901 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.838043 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.838182 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.877528 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.877574 4684 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc" exitCode=1
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.877747 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc"}
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.878471 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.878690 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused"
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.878923 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.879078 4684 scope.go:117] "RemoveContainer" containerID="7954e2feb1e89e1ec2c9055234e7b9bde7005afc751a3067c18cbb54d16045cc" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.879142 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.879349 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.879579 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.879823 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.880203 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.880461 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.880728 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.881064 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.881306 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.881530 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.881777 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.882000 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.882241 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.882461 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.882663 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.917027 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-x2mrs" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.917457 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.917792 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918000 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918167 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918336 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918498 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918654 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918828 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.918985 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.919141 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:45 crc kubenswrapper[4684]: I0123 09:12:45.919307 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.100377 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vk9hn" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.100687 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vk9hn" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.144406 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vk9hn" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.145069 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.145574 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.145812 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.146006 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.146166 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.146308 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.146482 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.146660 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.146850 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.147061 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.147272 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: E0123 09:12:46.732669 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" interval="7s" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.886588 4684 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="717f9aa20dabd593c67f05b99f7b884372537af160c43ec0588b48edd7adf2b7" exitCode=0 Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.886682 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"717f9aa20dabd593c67f05b99f7b884372537af160c43ec0588b48edd7adf2b7"} Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.886984 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.887476 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.887634 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: E0123 09:12:46.887993 4684 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.888114 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.888484 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.888998 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.889384 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.889664 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.889969 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused"
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.890267 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.890504 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.890799 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.891069 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.931186 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vk9hn" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.931726 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.932108 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.932560 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.933200 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.933464 
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.933657 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.933860 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.934006 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.934179 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.934346 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:46 crc kubenswrapper[4684]: I0123 09:12:46.934620 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.588437 4684 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused"
Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.589288 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused"
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.590838 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.592806 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.593135 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.593468 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.593880 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.594193 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.594533 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.594881 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.595150 4684 status_manager.go:851] "Failed to get status for 
pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.595360 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.804527 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hcd6g" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.804607 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hcd6g" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.851349 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hcd6g" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.851918 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.852156 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.852411 4684 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.852721 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.852949 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.853144 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.853354 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.853533 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.853754 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.853995 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.854354 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.854554 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.894299 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.894376 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b762d3bbfaa08d4ac1c5f31537ef81136ffd57b2a570150592df5265a0f8f169"} Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.932681 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hcd6g" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.933681 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.934046 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.934427 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.934844 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.935096 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.935436 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.935867 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.936239 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.936541 4684 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.936945 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" 
pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.937324 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:47 crc kubenswrapper[4684]: I0123 09:12:47.937581 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: E0123 09:12:48.481751 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T09:12:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3295ee1e384bd13d7f93a565d0e83b4cb096da43c673235ced6ac2c39d64dfa1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:91b55f2f378a9a1fbbda6c2423a0a3bc0c66e0dd45dee584db70782d1b7ba863\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1671873254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8ec63a5af90efa25f6221a312db015f279dc78f8c7319e0fa1782471e1e18acf\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:99b77813d1f8030ff0e28a82bfc5b89346cbad2ca5cb2f8927
4e21e035b5b066\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1176015092},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009227
2e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\
"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: E0123 09:12:48.482815 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: E0123 09:12:48.483559 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: E0123 09:12:48.484299 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: E0123 09:12:48.484820 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: E0123 09:12:48.484927 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.577758 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9nnzz" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.578044 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nnzz" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.616757 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nnzz" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.617278 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.617649 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" 
pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.617912 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.618169 4684 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.618412 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.618659 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.618903 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.619125 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.619349 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.619575 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc 
kubenswrapper[4684]: I0123 09:12:48.619795 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.620065 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.902964 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f01fea7b66b42b0f34f850ac2d8bdbdf8ff6c1ec65e4824a40f33e39e102a17a"} Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.903941 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.904294 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.904646 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.905181 4684 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.905559 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.905941 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 
09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.906190 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.906439 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.906660 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.906969 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.907216 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.907427 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.941535 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nnzz" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.942172 4684 status_manager.go:851] "Failed to get status for pod" podUID="597fda0b-2292-4816-a498-539a84a87f33" pod="openshift-marketplace/redhat-marketplace-74vxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-74vxp\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.942663 4684 status_manager.go:851] "Failed to get status for pod" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" pod="openshift-marketplace/certified-operators-x2mrs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x2mrs\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.943141 4684 status_manager.go:851] "Failed to get status for pod" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" 
pod="openshift-marketplace/redhat-operators-9nnzz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9nnzz\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.943419 4684 status_manager.go:851] "Failed to get status for pod" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.943684 4684 status_manager.go:851] "Failed to get status for pod" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" pod="openshift-marketplace/certified-operators-vk9hn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vk9hn\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.943961 4684 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.944222 4684 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.944489 4684 status_manager.go:851] "Failed to get status for pod" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" pod="openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85d79997c7-pbqc5\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.944768 4684 status_manager.go:851] "Failed to get status for pod" podUID="6386382b-e651-4888-857e-a3a7325f1f14" pod="openshift-marketplace/community-operators-4w77d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4w77d\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.945034 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 09:12:48.945314 4684 status_manager.go:851] "Failed to get status for pod" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" pod="openshift-controller-manager/controller-manager-b7cc87cc9-sxktc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b7cc87cc9-sxktc\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:48 crc kubenswrapper[4684]: I0123 
09:12:48.945592 4684 status_manager.go:851] "Failed to get status for pod" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" pod="openshift-marketplace/redhat-marketplace-hcd6g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hcd6g\": dial tcp 38.129.56.16:6443: connect: connection refused" Jan 23 09:12:50 crc kubenswrapper[4684]: I0123 09:12:50.914966 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"98e678199a8b1979a187e02fb86a26e2d84406c7827b89585de809006c681fca"} Jan 23 09:12:50 crc kubenswrapper[4684]: I0123 09:12:50.917410 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerStarted","Data":"9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71"} Jan 23 09:12:51 crc kubenswrapper[4684]: E0123 09:12:51.591150 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pc4kj" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" Jan 23 09:12:51 crc kubenswrapper[4684]: I0123 09:12:51.924738 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerStarted","Data":"55737adbbcd4852204cbbab14afeca010baddf56649eeff86b04d0ba17a57ec7"} Jan 23 09:12:51 crc kubenswrapper[4684]: I0123 09:12:51.927182 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4df97976c4b0d9e539ffc9f0103708f674a745aa33bb35831070c86f8289e387"} Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.175752 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.176138 4684 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.176179 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.935339 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bdea1c779670080703bf2040c689af8f27116a77eb257fd82cdf2e33e6a5f7e2"} Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.935390 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83785679f24773d63c6e893ed1ca4a84c41d46b4e92649b18c4b429c08cf0e19"} Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.935663 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.935680 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.935891 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.938001 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerStarted","Data":"126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7"} Jan 23 09:12:52 crc kubenswrapper[4684]: I0123 09:12:52.946631 4684 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:53 crc kubenswrapper[4684]: I0123 09:12:53.943222 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:53 crc kubenswrapper[4684]: I0123 09:12:53.943990 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.582024 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.582987 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.602763 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.602980 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.611223 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.630188 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.953332 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.953360 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.953536 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" event={"ID":"d25f9561-bcbb-4309-b3b6-de838bbf47bd","Type":"ContainerStarted","Data":"624b7ad9dfc8be9eb5de99b10b5d5c65e9b3b9a9899d0e5e1e7b76bc93442d41"} Jan 23 09:12:54 crc kubenswrapper[4684]: I0123 09:12:54.959317 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.333404 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="47d7e793-f4f7-47af-85fa-a6a1dbf60333" Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.959381 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/0.log" Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.959435 4684 generic.go:334] "Generic (PLEG): container finished" podID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" containerID="fc7d374c095a2e0fd705523b571dac4bef7cddbf1210feae8fa6c0990d98aa60" exitCode=255 Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.959768 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.959782 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.960411 4684 scope.go:117] "RemoveContainer" containerID="fc7d374c095a2e0fd705523b571dac4bef7cddbf1210feae8fa6c0990d98aa60" Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 09:12:55.960687 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" event={"ID":"d25f9561-bcbb-4309-b3b6-de838bbf47bd","Type":"ContainerDied","Data":"fc7d374c095a2e0fd705523b571dac4bef7cddbf1210feae8fa6c0990d98aa60"} Jan 23 09:12:55 crc kubenswrapper[4684]: I0123 
09:12:55.964414 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="47d7e793-f4f7-47af-85fa-a6a1dbf60333" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.314328 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4w77d" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.314671 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4w77d" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.372574 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4w77d" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.967418 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/1.log" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.968588 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/0.log" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.968728 4684 generic.go:334] "Generic (PLEG): container finished" podID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" containerID="8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe" exitCode=255 Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.968823 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" event={"ID":"d25f9561-bcbb-4309-b3b6-de838bbf47bd","Type":"ContainerDied","Data":"8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe"} Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.968876 4684 scope.go:117] "RemoveContainer" containerID="fc7d374c095a2e0fd705523b571dac4bef7cddbf1210feae8fa6c0990d98aa60" Jan 23 09:12:56 crc kubenswrapper[4684]: I0123 09:12:56.969327 4684 scope.go:117] "RemoveContainer" containerID="8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe" Jan 23 09:12:56 crc kubenswrapper[4684]: E0123 09:12:56.969645 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:12:57 crc kubenswrapper[4684]: I0123 09:12:57.020262 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4w77d" Jan 23 09:12:57 crc kubenswrapper[4684]: I0123 09:12:57.498962 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:12:57 crc kubenswrapper[4684]: I0123 09:12:57.499318 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:12:57 crc kubenswrapper[4684]: I0123 09:12:57.541722 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:12:57 crc kubenswrapper[4684]: I0123 09:12:57.976486 4684 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/1.log" Jan 23 09:12:57 crc kubenswrapper[4684]: I0123 09:12:57.977218 4684 scope.go:117] "RemoveContainer" containerID="8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe" Jan 23 09:12:57 crc kubenswrapper[4684]: E0123 09:12:57.977480 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.030490 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.832620 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vnv8t" Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.832685 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vnv8t" Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.887878 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vnv8t" Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.985515 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.985924 4684 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a" exitCode=1 Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.985969 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a"} Jan 23 09:12:58 crc kubenswrapper[4684]: I0123 09:12:58.986961 4684 scope.go:117] "RemoveContainer" containerID="4f741db786a98b9e9302c17c5f5061484149b0372c03b3cf06b017d37da7237a" Jan 23 09:12:59 crc kubenswrapper[4684]: I0123 09:12:59.028067 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vnv8t" Jan 23 09:12:59 crc kubenswrapper[4684]: I0123 09:12:59.592316 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:59 crc kubenswrapper[4684]: I0123 09:12:59.592783 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:12:59 crc kubenswrapper[4684]: I0123 09:12:59.593689 4684 scope.go:117] "RemoveContainer" containerID="8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe" Jan 23 09:12:59 crc kubenswrapper[4684]: E0123 09:12:59.594116 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:12:59 crc kubenswrapper[4684]: I0123 09:12:59.992888 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 23 09:12:59 crc kubenswrapper[4684]: I0123 09:12:59.994210 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"56ee226873559b938e2e3a22c506497269617cc8a433a059cc784bb17c2fcc7b"} Jan 23 09:13:02 crc kubenswrapper[4684]: I0123 09:13:02.179565 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:13:02 crc kubenswrapper[4684]: I0123 09:13:02.183618 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 09:13:06 crc kubenswrapper[4684]: I0123 09:13:06.029424 4684 generic.go:334] "Generic (PLEG): container finished" podID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerID="2795dc8067cfcebc9f49052e239941770f9149a311a853d77fc9c33d333bb07d" exitCode=0 Jan 23 09:13:06 crc kubenswrapper[4684]: I0123 09:13:06.029492 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerDied","Data":"2795dc8067cfcebc9f49052e239941770f9149a311a853d77fc9c33d333bb07d"} Jan 23 09:13:07 crc kubenswrapper[4684]: I0123 09:13:07.038255 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerStarted","Data":"d5c14ba4360eb52e85d58b457e69c943fdceda28b5bf0c035c9f4ef3317f52f7"} Jan 23 09:13:08 crc kubenswrapper[4684]: I0123 09:13:08.467188 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 09:13:08 crc kubenswrapper[4684]: I0123 09:13:08.854542 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 09:13:09 crc kubenswrapper[4684]: I0123 09:13:09.068989 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 09:13:09 crc kubenswrapper[4684]: I0123 09:13:09.469677 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 09:13:10 crc kubenswrapper[4684]: I0123 09:13:10.148118 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 09:13:10 crc kubenswrapper[4684]: I0123 09:13:10.582829 4684 scope.go:117] "RemoveContainer" containerID="8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe" Jan 23 09:13:11 crc kubenswrapper[4684]: I0123 09:13:11.068717 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/1.log" Jan 23 09:13:11 crc 
kubenswrapper[4684]: I0123 09:13:11.069048 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" event={"ID":"d25f9561-bcbb-4309-b3b6-de838bbf47bd","Type":"ContainerStarted","Data":"48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80"} Jan 23 09:13:11 crc kubenswrapper[4684]: I0123 09:13:11.069596 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:13:11 crc kubenswrapper[4684]: I0123 09:13:11.668657 4684 patch_prober.go:28] interesting pod/oauth-openshift-69cb985589-w7hkw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": read tcp 10.217.0.2:50108->10.217.0.63:6443: read: connection reset by peer" start-of-body= Jan 23 09:13:11 crc kubenswrapper[4684]: I0123 09:13:11.671096 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": read tcp 10.217.0.2:50108->10.217.0.63:6443: read: connection reset by peer" Jan 23 09:13:12 crc kubenswrapper[4684]: I0123 09:13:12.077241 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/2.log" Jan 23 09:13:12 crc kubenswrapper[4684]: I0123 09:13:12.078118 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/1.log" Jan 23 09:13:12 crc kubenswrapper[4684]: I0123 09:13:12.078167 4684 generic.go:334] "Generic (PLEG): container finished" podID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" containerID="48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80" exitCode=255 Jan 23 09:13:12 crc kubenswrapper[4684]: I0123 09:13:12.078201 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" event={"ID":"d25f9561-bcbb-4309-b3b6-de838bbf47bd","Type":"ContainerDied","Data":"48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80"} Jan 23 09:13:12 crc kubenswrapper[4684]: I0123 09:13:12.078310 4684 scope.go:117] "RemoveContainer" containerID="8b27cb28e3252f4fc8e44a8dcf95b6d75ce01558cc5a7013773fc5b30a8e4ffe" Jan 23 09:13:12 crc kubenswrapper[4684]: I0123 09:13:12.078811 4684 scope.go:117] "RemoveContainer" containerID="48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80" Jan 23 09:13:12 crc kubenswrapper[4684]: E0123 09:13:12.079129 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:13:13 crc kubenswrapper[4684]: I0123 09:13:13.085848 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/2.log" Jan 23 09:13:13 crc kubenswrapper[4684]: I0123 09:13:13.086776 4684 scope.go:117] "RemoveContainer" 
containerID="48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80" Jan 23 09:13:13 crc kubenswrapper[4684]: E0123 09:13:13.086983 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:13:14 crc kubenswrapper[4684]: I0123 09:13:14.082437 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 09:13:14 crc kubenswrapper[4684]: I0123 09:13:14.097358 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 09:13:15 crc kubenswrapper[4684]: I0123 09:13:15.156449 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 09:13:15 crc kubenswrapper[4684]: I0123 09:13:15.799525 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pc4kj" Jan 23 09:13:15 crc kubenswrapper[4684]: I0123 09:13:15.800684 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pc4kj" Jan 23 09:13:15 crc kubenswrapper[4684]: I0123 09:13:15.842592 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pc4kj" Jan 23 09:13:16 crc kubenswrapper[4684]: I0123 09:13:16.070239 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 09:13:16 crc kubenswrapper[4684]: I0123 09:13:16.152924 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pc4kj" Jan 23 09:13:17 crc kubenswrapper[4684]: I0123 09:13:17.203313 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 09:13:19 crc kubenswrapper[4684]: I0123 09:13:19.448212 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 09:13:19 crc kubenswrapper[4684]: I0123 09:13:19.590519 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:13:19 crc kubenswrapper[4684]: I0123 09:13:19.591163 4684 scope.go:117] "RemoveContainer" containerID="48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80" Jan 23 09:13:19 crc kubenswrapper[4684]: E0123 09:13:19.591351 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-69cb985589-w7hkw_openshift-authentication(d25f9561-bcbb-4309-b3b6-de838bbf47bd)\"" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podUID="d25f9561-bcbb-4309-b3b6-de838bbf47bd" Jan 23 09:13:20 crc kubenswrapper[4684]: I0123 09:13:20.033430 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 09:13:23 crc 
kubenswrapper[4684]: I0123 09:13:23.953138 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 09:13:24 crc kubenswrapper[4684]: I0123 09:13:24.636996 4684 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 09:13:24 crc kubenswrapper[4684]: I0123 09:13:24.852759 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 09:13:24 crc kubenswrapper[4684]: I0123 09:13:24.867912 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 09:13:25 crc kubenswrapper[4684]: I0123 09:13:25.206908 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 09:13:25 crc kubenswrapper[4684]: I0123 09:13:25.306113 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 09:13:25 crc kubenswrapper[4684]: I0123 09:13:25.346566 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 09:13:26 crc kubenswrapper[4684]: I0123 09:13:26.931629 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 09:13:27 crc kubenswrapper[4684]: I0123 09:13:27.650979 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 09:13:28 crc kubenswrapper[4684]: I0123 09:13:28.050149 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 09:13:28 crc kubenswrapper[4684]: I0123 09:13:28.930617 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 09:13:29 crc kubenswrapper[4684]: I0123 09:13:29.023973 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 09:13:29 crc kubenswrapper[4684]: I0123 09:13:29.131743 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 09:13:29 crc kubenswrapper[4684]: I0123 09:13:29.242846 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 09:13:29 crc kubenswrapper[4684]: I0123 09:13:29.338056 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 09:13:29 crc kubenswrapper[4684]: I0123 09:13:29.456284 4684 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 09:13:29 crc kubenswrapper[4684]: I0123 09:13:29.701905 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 09:13:30 crc kubenswrapper[4684]: I0123 09:13:30.460735 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 09:13:30 crc kubenswrapper[4684]: I0123 09:13:30.925726 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 
09:13:30 crc kubenswrapper[4684]: I0123 09:13:30.940089 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 09:13:31 crc kubenswrapper[4684]: I0123 09:13:31.490810 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 09:13:31 crc kubenswrapper[4684]: I0123 09:13:31.549855 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 09:13:31 crc kubenswrapper[4684]: I0123 09:13:31.585093 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 09:13:31 crc kubenswrapper[4684]: I0123 09:13:31.687312 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 09:13:31 crc kubenswrapper[4684]: I0123 09:13:31.915262 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 09:13:31 crc kubenswrapper[4684]: I0123 09:13:31.940687 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.246715 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.415202 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.482072 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.581644 4684 scope.go:117] "RemoveContainer" containerID="48097d8a5164b4112cbdaeaf9888d4b138e19f3e7ec9af331ec1e7d48e862e80" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.659034 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.706576 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.724920 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 09:13:33 crc kubenswrapper[4684]: I0123 09:13:33.967816 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.040654 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.232241 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.289730 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.316089 4684 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.700722 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.784262 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.833773 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.890305 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 09:13:34 crc kubenswrapper[4684]: I0123 09:13:34.935246 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.190389 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.209990 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-69cb985589-w7hkw_d25f9561-bcbb-4309-b3b6-de838bbf47bd/oauth-openshift/2.log" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.210533 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" event={"ID":"d25f9561-bcbb-4309-b3b6-de838bbf47bd","Type":"ContainerStarted","Data":"a4202c0e2c1a5bcd86cd67cc448a5da207297acdaf364f7622782999a8550e43"} Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.210955 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.219840 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.235267 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.388151 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.439953 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.472388 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.528294 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.735597 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 09:13:35 crc kubenswrapper[4684]: I0123 09:13:35.987358 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 09:13:36 crc kubenswrapper[4684]: I0123 09:13:36.108479 4684 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 09:13:36 crc kubenswrapper[4684]: I0123 09:13:36.354782 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 09:13:36 crc kubenswrapper[4684]: I0123 09:13:36.563481 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 09:13:36 crc kubenswrapper[4684]: I0123 09:13:36.598503 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 09:13:36 crc kubenswrapper[4684]: I0123 09:13:36.646616 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 09:13:36 crc kubenswrapper[4684]: I0123 09:13:36.911113 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.063343 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.222321 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.360312 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.498783 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.584060 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.710972 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.719051 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.723738 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.851637 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 09:13:37 crc kubenswrapper[4684]: I0123 09:13:37.913278 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.062673 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.077880 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.079735 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.093084 4684 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.304000 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.434718 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.689280 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.735046 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.743141 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.993128 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 09:13:38 crc kubenswrapper[4684]: I0123 09:13:38.995564 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.107967 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.278149 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.416829 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.423005 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.453973 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.715736 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.903771 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 09:13:39 crc kubenswrapper[4684]: I0123 09:13:39.970152 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.010090 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.013686 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.115976 4684 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.159712 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.244631 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.293441 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.347807 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.566889 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.644424 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.771586 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 09:13:40 crc kubenswrapper[4684]: I0123 09:13:40.964560 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.094220 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.220906 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.224188 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.364334 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.420098 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.753305 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.827430 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 09:13:41 crc kubenswrapper[4684]: I0123 09:13:41.904507 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.122868 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.172042 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.335864 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 09:13:42 crc 
kubenswrapper[4684]: I0123 09:13:42.483074 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.576853 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.595200 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.728958 4684 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.823300 4684 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.869307 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.894964 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.929114 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.936245 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 09:13:42 crc kubenswrapper[4684]: I0123 09:13:42.979723 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.122997 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.152939 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.278870 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.311389 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.596204 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.665591 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.682785 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.690536 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.712086 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.837830 4684 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.895937 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 09:13:43 crc kubenswrapper[4684]: I0123 09:13:43.944930 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 09:13:44 crc kubenswrapper[4684]: I0123 09:13:44.182951 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 09:13:44 crc kubenswrapper[4684]: I0123 09:13:44.366963 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 09:13:44 crc kubenswrapper[4684]: I0123 09:13:44.463299 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 09:13:44 crc kubenswrapper[4684]: I0123 09:13:44.566286 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 09:13:44 crc kubenswrapper[4684]: I0123 09:13:44.590684 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 09:13:44 crc kubenswrapper[4684]: I0123 09:13:44.834976 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 09:13:45 crc kubenswrapper[4684]: I0123 09:13:45.116052 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 09:13:45 crc kubenswrapper[4684]: I0123 09:13:45.236038 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 09:13:45 crc kubenswrapper[4684]: I0123 09:13:45.268877 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 09:13:45 crc kubenswrapper[4684]: I0123 09:13:45.294470 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 09:13:45 crc kubenswrapper[4684]: I0123 09:13:45.882994 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.023053 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.098984 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.163067 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.458424 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.536342 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 09:13:46 crc 
kubenswrapper[4684]: I0123 09:13:46.731130 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.844474 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.869505 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.899836 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.978199 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 09:13:46 crc kubenswrapper[4684]: I0123 09:13:46.992127 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.084165 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.114621 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.142063 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.194427 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.383942 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.648679 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.657380 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.819488 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.836097 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 09:13:47 crc kubenswrapper[4684]: I0123 09:13:47.855305 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.105822 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.250815 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.282022 4684 generic.go:334] "Generic (PLEG): container finished" podID="703df6b3-b903-4818-b0c8-8681de1c6065" 
containerID="bf0e2db7f62363906898199e85bc114cf704a5ad24bf8db0ca11597b9b1db919" exitCode=0 Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.282072 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" event={"ID":"703df6b3-b903-4818-b0c8-8681de1c6065","Type":"ContainerDied","Data":"bf0e2db7f62363906898199e85bc114cf704a5ad24bf8db0ca11597b9b1db919"} Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.282642 4684 scope.go:117] "RemoveContainer" containerID="bf0e2db7f62363906898199e85bc114cf704a5ad24bf8db0ca11597b9b1db919" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.372133 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.388333 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.508953 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.642139 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.776569 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.876354 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 09:13:48 crc kubenswrapper[4684]: I0123 09:13:48.981723 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.107258 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.115018 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.157772 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.245076 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.289144 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" event={"ID":"703df6b3-b903-4818-b0c8-8681de1c6065","Type":"ContainerStarted","Data":"080069b9837351b3819630d5376f3ae1b2cacc3a63713a83f095675ad9ff66ce"} Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.289436 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.291651 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.479900 4684 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.714965 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.742394 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 09:13:49 crc kubenswrapper[4684]: I0123 09:13:49.956326 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.011720 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.232164 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.275014 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.322719 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.533732 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.607771 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.646851 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.788036 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.900981 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.904345 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.908140 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.925225 4684 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.926160 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74vxp" podStartSLOduration=67.06262384 podStartE2EDuration="4m3.926140411s" podCreationTimestamp="2026-01-23 09:09:47 +0000 UTC" firstStartedPulling="2026-01-23 09:09:53.398122255 +0000 UTC m=+166.021500796" lastFinishedPulling="2026-01-23 09:12:50.261638826 +0000 UTC m=+342.885017367" observedRunningTime="2026-01-23 09:12:55.633322665 +0000 UTC m=+348.256701216" watchObservedRunningTime="2026-01-23 09:13:50.926140411 +0000 UTC m=+403.549518952" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 
09:13:50.926279 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vnv8t" podStartSLOduration=65.816153068 podStartE2EDuration="4m2.926274425s" podCreationTimestamp="2026-01-23 09:09:48 +0000 UTC" firstStartedPulling="2026-01-23 09:09:54.41647113 +0000 UTC m=+167.039849661" lastFinishedPulling="2026-01-23 09:12:51.526592487 +0000 UTC m=+344.149971018" observedRunningTime="2026-01-23 09:12:55.656647813 +0000 UTC m=+348.280026354" watchObservedRunningTime="2026-01-23 09:13:50.926274425 +0000 UTC m=+403.549652966" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.926824 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69cb985589-w7hkw" podStartSLOduration=130.926820981 podStartE2EDuration="2m10.926820981s" podCreationTimestamp="2026-01-23 09:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:13:11.08498945 +0000 UTC m=+363.708368001" watchObservedRunningTime="2026-01-23 09:13:50.926820981 +0000 UTC m=+403.550199522" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.927391 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pc4kj" podStartSLOduration=52.75438605 podStartE2EDuration="4m5.927387187s" podCreationTimestamp="2026-01-23 09:09:45 +0000 UTC" firstStartedPulling="2026-01-23 09:09:53.409679567 +0000 UTC m=+166.033058108" lastFinishedPulling="2026-01-23 09:13:06.582680704 +0000 UTC m=+359.206059245" observedRunningTime="2026-01-23 09:13:07.058253644 +0000 UTC m=+359.681632185" watchObservedRunningTime="2026-01-23 09:13:50.927387187 +0000 UTC m=+403.550765728" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.927691 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=87.927687706 podStartE2EDuration="1m27.927687706s" podCreationTimestamp="2026-01-23 09:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:12:55.329829558 +0000 UTC m=+347.953208099" watchObservedRunningTime="2026-01-23 09:13:50.927687706 +0000 UTC m=+403.551066247" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.927800 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nnzz" podStartSLOduration=80.015274388 podStartE2EDuration="4m2.927795629s" podCreationTimestamp="2026-01-23 09:09:48 +0000 UTC" firstStartedPulling="2026-01-23 09:09:54.416841502 +0000 UTC m=+167.040220043" lastFinishedPulling="2026-01-23 09:12:37.329362743 +0000 UTC m=+329.952741284" observedRunningTime="2026-01-23 09:12:55.537370779 +0000 UTC m=+348.160749340" watchObservedRunningTime="2026-01-23 09:13:50.927795629 +0000 UTC m=+403.551174180" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.928243 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x2mrs" podStartSLOduration=81.768519048 podStartE2EDuration="4m5.928238692s" podCreationTimestamp="2026-01-23 09:09:45 +0000 UTC" firstStartedPulling="2026-01-23 09:09:52.376219527 +0000 UTC m=+164.999598068" lastFinishedPulling="2026-01-23 09:12:36.535939171 +0000 UTC m=+329.159317712" observedRunningTime="2026-01-23 09:12:55.520818289 
+0000 UTC m=+348.144196830" watchObservedRunningTime="2026-01-23 09:13:50.928238692 +0000 UTC m=+403.551617233" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.928321 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4w77d" podStartSLOduration=68.793914838 podStartE2EDuration="4m5.928317684s" podCreationTimestamp="2026-01-23 09:09:45 +0000 UTC" firstStartedPulling="2026-01-23 09:09:53.409498391 +0000 UTC m=+166.032876932" lastFinishedPulling="2026-01-23 09:12:50.543901237 +0000 UTC m=+343.167279778" observedRunningTime="2026-01-23 09:12:55.618061727 +0000 UTC m=+348.241440288" watchObservedRunningTime="2026-01-23 09:13:50.928317684 +0000 UTC m=+403.551696225" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.928819 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hcd6g" podStartSLOduration=90.983738368 podStartE2EDuration="4m3.928815629s" podCreationTimestamp="2026-01-23 09:09:47 +0000 UTC" firstStartedPulling="2026-01-23 09:09:54.416970506 +0000 UTC m=+167.040349047" lastFinishedPulling="2026-01-23 09:12:27.362047747 +0000 UTC m=+319.985426308" observedRunningTime="2026-01-23 09:12:55.486282403 +0000 UTC m=+348.109660954" watchObservedRunningTime="2026-01-23 09:13:50.928815629 +0000 UTC m=+403.552194170" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.929718 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vk9hn" podStartSLOduration=82.30324546 podStartE2EDuration="4m5.929711435s" podCreationTimestamp="2026-01-23 09:09:45 +0000 UTC" firstStartedPulling="2026-01-23 09:09:53.409509271 +0000 UTC m=+166.032887812" lastFinishedPulling="2026-01-23 09:12:37.035975226 +0000 UTC m=+329.659353787" observedRunningTime="2026-01-23 09:12:55.586009883 +0000 UTC m=+348.209388424" watchObservedRunningTime="2026-01-23 09:13:50.929711435 +0000 UTC m=+403.553089976" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.930179 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b7cc87cc9-sxktc","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-85d79997c7-pbqc5"] Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.930240 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g","openshift-controller-manager/controller-manager-7f487d4db6-2qswm"] Jan 23 09:13:50 crc kubenswrapper[4684]: E0123 09:13:50.930465 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" containerName="installer" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.930492 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" containerName="installer" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.930621 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcaacae-d1c5-4a66-9220-54ee4b5991ac" containerName="installer" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.930650 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.930670 4684 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e31ff448-5258-4887-9532-ccb1444b5a2f" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.931233 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.931486 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69cb985589-w7hkw"] Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.931553 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.936386 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.936394 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.936824 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.936956 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.937053 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.937332 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.937533 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.938414 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.939042 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.939239 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.945982 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.946248 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.949906 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 09:13:50 crc kubenswrapper[4684]: I0123 09:13:50.950069 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.020778 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=59.02076131 
podStartE2EDuration="59.02076131s" podCreationTimestamp="2026-01-23 09:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:13:51.008742649 +0000 UTC m=+403.632121190" watchObservedRunningTime="2026-01-23 09:13:51.02076131 +0000 UTC m=+403.644139841" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090061 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b457a1f4-0219-476a-badf-21411f731bf4-serving-cert\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090129 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmfl\" (UniqueName: \"kubernetes.io/projected/b457a1f4-0219-476a-badf-21411f731bf4-kube-api-access-4jmfl\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090177 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499e807c-045f-4ea8-8942-156bcc0e8050-config\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090217 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/499e807c-045f-4ea8-8942-156bcc0e8050-client-ca\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090259 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/499e807c-045f-4ea8-8942-156bcc0e8050-serving-cert\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090295 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qjhz\" (UniqueName: \"kubernetes.io/projected/499e807c-045f-4ea8-8942-156bcc0e8050-kube-api-access-7qjhz\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090320 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-client-ca\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090384 
4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-proxy-ca-bundles\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.090409 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-config\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.191386 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jmfl\" (UniqueName: \"kubernetes.io/projected/b457a1f4-0219-476a-badf-21411f731bf4-kube-api-access-4jmfl\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.191738 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499e807c-045f-4ea8-8942-156bcc0e8050-config\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.191870 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/499e807c-045f-4ea8-8942-156bcc0e8050-client-ca\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.191997 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/499e807c-045f-4ea8-8942-156bcc0e8050-serving-cert\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.192092 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qjhz\" (UniqueName: \"kubernetes.io/projected/499e807c-045f-4ea8-8942-156bcc0e8050-kube-api-access-7qjhz\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.192186 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-client-ca\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.192305 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-proxy-ca-bundles\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.192423 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-config\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.192578 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b457a1f4-0219-476a-badf-21411f731bf4-serving-cert\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.196082 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499e807c-045f-4ea8-8942-156bcc0e8050-config\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.197206 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-client-ca\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.198233 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/499e807c-045f-4ea8-8942-156bcc0e8050-client-ca\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.199564 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-proxy-ca-bundles\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.201469 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b457a1f4-0219-476a-badf-21411f731bf4-config\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.214578 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/499e807c-045f-4ea8-8942-156bcc0e8050-serving-cert\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " 
pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.225497 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b457a1f4-0219-476a-badf-21411f731bf4-serving-cert\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.378522 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.391986 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jmfl\" (UniqueName: \"kubernetes.io/projected/b457a1f4-0219-476a-badf-21411f731bf4-kube-api-access-4jmfl\") pod \"controller-manager-7f487d4db6-2qswm\" (UID: \"b457a1f4-0219-476a-badf-21411f731bf4\") " pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.414566 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qjhz\" (UniqueName: \"kubernetes.io/projected/499e807c-045f-4ea8-8942-156bcc0e8050-kube-api-access-7qjhz\") pod \"route-controller-manager-7655dfc7db-qxr5g\" (UID: \"499e807c-045f-4ea8-8942-156bcc0e8050\") " pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.561566 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.570331 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.580646 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.594314 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa75b6a1-3672-4315-8606-19758a6604b7" path="/var/lib/kubelet/pods/aa75b6a1-3672-4315-8606-19758a6604b7/volumes" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.594816 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea467a71-d4b5-4361-b648-61dc754033ca" path="/var/lib/kubelet/pods/ea467a71-d4b5-4361-b648-61dc754033ca/volumes" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.617432 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.743031 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 09:13:51 crc kubenswrapper[4684]: I0123 09:13:51.955305 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 09:13:52 crc kubenswrapper[4684]: I0123 09:13:52.036281 4684 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 09:13:52 crc kubenswrapper[4684]: I0123 09:13:52.072814 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 09:13:52 crc kubenswrapper[4684]: I0123 09:13:52.304561 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 09:13:52 crc kubenswrapper[4684]: I0123 09:13:52.821284 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 09:13:53 crc kubenswrapper[4684]: I0123 09:13:53.131323 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 09:13:53 crc kubenswrapper[4684]: I0123 09:13:53.308247 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 09:13:53 crc kubenswrapper[4684]: I0123 09:13:53.478178 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 09:13:53 crc kubenswrapper[4684]: I0123 09:13:53.829810 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 09:13:53 crc kubenswrapper[4684]: I0123 09:13:53.860361 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 09:13:53 crc kubenswrapper[4684]: I0123 09:13:53.944305 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 09:13:54 crc kubenswrapper[4684]: I0123 09:13:54.404251 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 09:13:54 crc kubenswrapper[4684]: I0123 09:13:54.631781 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 09:13:54 crc kubenswrapper[4684]: I0123 09:13:54.863241 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 09:13:54 crc kubenswrapper[4684]: I0123 09:13:54.893241 
4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 09:13:55 crc kubenswrapper[4684]: I0123 09:13:55.036434 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 09:13:55 crc kubenswrapper[4684]: I0123 09:13:55.451550 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 09:13:55 crc kubenswrapper[4684]: I0123 09:13:55.856244 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 09:13:55 crc kubenswrapper[4684]: I0123 09:13:55.856484 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 09:13:55 crc kubenswrapper[4684]: I0123 09:13:55.869118 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 09:13:56 crc kubenswrapper[4684]: I0123 09:13:56.034487 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 09:13:56 crc kubenswrapper[4684]: I0123 09:13:56.074993 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 09:13:56 crc kubenswrapper[4684]: I0123 09:13:56.139641 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 09:13:56 crc kubenswrapper[4684]: I0123 09:13:56.645364 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 09:13:56 crc kubenswrapper[4684]: I0123 09:13:56.915967 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 09:13:57 crc kubenswrapper[4684]: I0123 09:13:57.318998 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 09:13:57 crc kubenswrapper[4684]: I0123 09:13:57.751372 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 09:13:58 crc kubenswrapper[4684]: I0123 09:13:58.434689 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 09:13:58 crc kubenswrapper[4684]: I0123 09:13:58.580267 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 09:13:58 crc kubenswrapper[4684]: I0123 09:13:58.638866 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.232931 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f487d4db6-2qswm"] Jan 23 09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.252277 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g"] Jan 23 09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.365300 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f487d4db6-2qswm"] Jan 23 
09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.389137 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" event={"ID":"b457a1f4-0219-476a-badf-21411f731bf4","Type":"ContainerStarted","Data":"643137ef3f5313c7d4e1ed2ca95b318e0d89166fb16266c8fda3bd35b441f292"} Jan 23 09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.522724 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g"] Jan 23 09:13:59 crc kubenswrapper[4684]: W0123 09:13:59.528035 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499e807c_045f_4ea8_8942_156bcc0e8050.slice/crio-d762c8008de35ea8064b97768b70739c24d7d96729f4c9a5b7a01f22f9c223f2 WatchSource:0}: Error finding container d762c8008de35ea8064b97768b70739c24d7d96729f4c9a5b7a01f22f9c223f2: Status 404 returned error can't find the container with id d762c8008de35ea8064b97768b70739c24d7d96729f4c9a5b7a01f22f9c223f2 Jan 23 09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.567893 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 09:13:59 crc kubenswrapper[4684]: I0123 09:13:59.702407 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 09:14:00 crc kubenswrapper[4684]: I0123 09:14:00.395747 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" event={"ID":"499e807c-045f-4ea8-8942-156bcc0e8050","Type":"ContainerStarted","Data":"d762c8008de35ea8064b97768b70739c24d7d96729f4c9a5b7a01f22f9c223f2"} Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.197218 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.402176 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" event={"ID":"499e807c-045f-4ea8-8942-156bcc0e8050","Type":"ContainerStarted","Data":"f012283471d06a8eaa4f6b3f4120e689f8971e33fa9444d6ea8ca8355ca953bd"} Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.404352 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.417827 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" event={"ID":"b457a1f4-0219-476a-badf-21411f731bf4","Type":"ContainerStarted","Data":"9a1239a36ae71bd2488985ee24ec3a5ea4a42b52191990c486b9a6716de7a256"} Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.418634 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.452433 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.466637 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7655dfc7db-qxr5g" podStartSLOduration=106.466623124 podStartE2EDuration="1m46.466623124s" podCreationTimestamp="2026-01-23 09:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:14:01.46236281 +0000 UTC m=+414.085741341" watchObservedRunningTime="2026-01-23 09:14:01.466623124 +0000 UTC m=+414.090001655" Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.505619 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" Jan 23 09:14:01 crc kubenswrapper[4684]: I0123 09:14:01.582220 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f487d4db6-2qswm" podStartSLOduration=106.582198713 podStartE2EDuration="1m46.582198713s" podCreationTimestamp="2026-01-23 09:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:14:01.55980675 +0000 UTC m=+414.183185311" watchObservedRunningTime="2026-01-23 09:14:01.582198713 +0000 UTC m=+414.205577264" Jan 23 09:14:02 crc kubenswrapper[4684]: I0123 09:14:02.277849 4684 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 09:14:02 crc kubenswrapper[4684]: I0123 09:14:02.278181 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b" gracePeriod=5 Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.416676 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.417793 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.454774 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.454847 4684 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b" exitCode=137 Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.454914 4684 scope.go:117] "RemoveContainer" containerID="0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.454940 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.476547 4684 scope.go:117] "RemoveContainer" containerID="0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b" Jan 23 09:14:07 crc kubenswrapper[4684]: E0123 09:14:07.477276 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b\": container with ID starting with 0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b not found: ID does not exist" containerID="0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.477338 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b"} err="failed to get container status \"0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b\": rpc error: code = NotFound desc = could not find container \"0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b\": container with ID starting with 0dd00780f0e77dc2a04fe346f2e62a6625cb34d05249ea99375be8221c7a4a5b not found: ID does not exist" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555468 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555516 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555561 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555582 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555585 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555673 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555692 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.555768 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.556735 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.557713 4684 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.557735 4684 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.557747 4684 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.557759 4684 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.566767 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.593134 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.593421 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.607245 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.607291 4684 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a2478790-e5ca-470a-a34c-d58b240ff378" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.614839 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.614955 4684 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a2478790-e5ca-470a-a34c-d58b240ff378" Jan 23 09:14:07 crc kubenswrapper[4684]: I0123 09:14:07.659069 4684 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:13 crc kubenswrapper[4684]: I0123 09:14:13.728721 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:14:13 crc kubenswrapper[4684]: I0123 09:14:13.729309 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:14:13 crc kubenswrapper[4684]: I0123 09:14:13.781715 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vk9hn"] Jan 23 09:14:13 crc kubenswrapper[4684]: I0123 09:14:13.781996 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vk9hn" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="registry-server" containerID="cri-o://ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7" gracePeriod=2 Jan 23 09:14:13 crc kubenswrapper[4684]: I0123 09:14:13.974111 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4w77d"] Jan 23 09:14:13 crc kubenswrapper[4684]: I0123 09:14:13.974394 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4w77d" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="registry-server" containerID="cri-o://55737adbbcd4852204cbbab14afeca010baddf56649eeff86b04d0ba17a57ec7" gracePeriod=2 Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.347387 4684 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk9hn" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.457807 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-utilities\") pod \"0cd73bd8-4034-44e9-b00a-75ea938360c8\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.457900 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8hxb\" (UniqueName: \"kubernetes.io/projected/0cd73bd8-4034-44e9-b00a-75ea938360c8-kube-api-access-q8hxb\") pod \"0cd73bd8-4034-44e9-b00a-75ea938360c8\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.458016 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-catalog-content\") pod \"0cd73bd8-4034-44e9-b00a-75ea938360c8\" (UID: \"0cd73bd8-4034-44e9-b00a-75ea938360c8\") " Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.461248 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-utilities" (OuterVolumeSpecName: "utilities") pod "0cd73bd8-4034-44e9-b00a-75ea938360c8" (UID: "0cd73bd8-4034-44e9-b00a-75ea938360c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.468061 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd73bd8-4034-44e9-b00a-75ea938360c8-kube-api-access-q8hxb" (OuterVolumeSpecName: "kube-api-access-q8hxb") pod "0cd73bd8-4034-44e9-b00a-75ea938360c8" (UID: "0cd73bd8-4034-44e9-b00a-75ea938360c8"). InnerVolumeSpecName "kube-api-access-q8hxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.504074 4684 generic.go:334] "Generic (PLEG): container finished" podID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerID="ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7" exitCode=0 Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.504168 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerDied","Data":"ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7"} Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.504168 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vk9hn" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.504643 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk9hn" event={"ID":"0cd73bd8-4034-44e9-b00a-75ea938360c8","Type":"ContainerDied","Data":"02a79c96ce85262af4bccbdaf679ca1f8afe6db43539c8474bc14422d95c30f6"} Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.504705 4684 scope.go:117] "RemoveContainer" containerID="ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.516515 4684 generic.go:334] "Generic (PLEG): container finished" podID="6386382b-e651-4888-857e-a3a7325f1f14" containerID="55737adbbcd4852204cbbab14afeca010baddf56649eeff86b04d0ba17a57ec7" exitCode=0 Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.516692 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerDied","Data":"55737adbbcd4852204cbbab14afeca010baddf56649eeff86b04d0ba17a57ec7"} Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.526975 4684 scope.go:117] "RemoveContainer" containerID="958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.527712 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cd73bd8-4034-44e9-b00a-75ea938360c8" (UID: "0cd73bd8-4034-44e9-b00a-75ea938360c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.538968 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4w77d" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.550713 4684 scope.go:117] "RemoveContainer" containerID="c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.559874 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.559927 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd73bd8-4034-44e9-b00a-75ea938360c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.559940 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8hxb\" (UniqueName: \"kubernetes.io/projected/0cd73bd8-4034-44e9-b00a-75ea938360c8-kube-api-access-q8hxb\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.578483 4684 scope.go:117] "RemoveContainer" containerID="ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7" Jan 23 09:14:14 crc kubenswrapper[4684]: E0123 09:14:14.579235 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7\": container with ID starting with ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7 not found: ID does not exist" containerID="ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.579282 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7"} err="failed to get container status \"ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7\": rpc error: code = NotFound desc = could not find container \"ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7\": container with ID starting with ca9d112c1238cb9c63f346015a5fb8d69defb100efff384de4ebf55847fb8dc7 not found: ID does not exist" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.579313 4684 scope.go:117] "RemoveContainer" containerID="958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639" Jan 23 09:14:14 crc kubenswrapper[4684]: E0123 09:14:14.579925 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639\": container with ID starting with 958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639 not found: ID does not exist" containerID="958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.579951 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639"} err="failed to get container status \"958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639\": rpc error: code = NotFound desc = could not find container \"958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639\": container with ID starting with 958b4d3b02248b0f89d810bbcfdb481c0b9625c53aae088528f6ccc9bc27c639 not found: ID does not exist" Jan 23 
09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.579969 4684 scope.go:117] "RemoveContainer" containerID="c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f" Jan 23 09:14:14 crc kubenswrapper[4684]: E0123 09:14:14.580272 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f\": container with ID starting with c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f not found: ID does not exist" containerID="c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.580299 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f"} err="failed to get container status \"c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f\": rpc error: code = NotFound desc = could not find container \"c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f\": container with ID starting with c165a5490980dba46a6a11e0d4d67e28cfd06b0160b050c3d226fee89fbc4e3f not found: ID does not exist" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.660719 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpttc\" (UniqueName: \"kubernetes.io/projected/6386382b-e651-4888-857e-a3a7325f1f14-kube-api-access-cpttc\") pod \"6386382b-e651-4888-857e-a3a7325f1f14\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.660805 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-utilities\") pod \"6386382b-e651-4888-857e-a3a7325f1f14\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.660895 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-catalog-content\") pod \"6386382b-e651-4888-857e-a3a7325f1f14\" (UID: \"6386382b-e651-4888-857e-a3a7325f1f14\") " Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.661511 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-utilities" (OuterVolumeSpecName: "utilities") pod "6386382b-e651-4888-857e-a3a7325f1f14" (UID: "6386382b-e651-4888-857e-a3a7325f1f14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.663637 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6386382b-e651-4888-857e-a3a7325f1f14-kube-api-access-cpttc" (OuterVolumeSpecName: "kube-api-access-cpttc") pod "6386382b-e651-4888-857e-a3a7325f1f14" (UID: "6386382b-e651-4888-857e-a3a7325f1f14"). InnerVolumeSpecName "kube-api-access-cpttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.710151 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6386382b-e651-4888-857e-a3a7325f1f14" (UID: "6386382b-e651-4888-857e-a3a7325f1f14"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.762271 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpttc\" (UniqueName: \"kubernetes.io/projected/6386382b-e651-4888-857e-a3a7325f1f14-kube-api-access-cpttc\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.762313 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.762328 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6386382b-e651-4888-857e-a3a7325f1f14-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.835190 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vk9hn"] Jan 23 09:14:14 crc kubenswrapper[4684]: I0123 09:14:14.841363 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vk9hn"] Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.524540 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4w77d" event={"ID":"6386382b-e651-4888-857e-a3a7325f1f14","Type":"ContainerDied","Data":"e2eababe803b0e383040d4e00a014fd611b2b02d6377f845e753369e476ad8ab"} Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.524588 4684 scope.go:117] "RemoveContainer" containerID="55737adbbcd4852204cbbab14afeca010baddf56649eeff86b04d0ba17a57ec7" Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.524674 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4w77d" Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.540279 4684 scope.go:117] "RemoveContainer" containerID="4a0b2a0ef5c98c480706279798937583e4c985a9e6507df4a2a8b280aea634ca" Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.548612 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4w77d"] Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.557169 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4w77d"] Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.569859 4684 scope.go:117] "RemoveContainer" containerID="1369ef62f47d70a6abfe04a6bccb8793c585d6bd4fa2af1a177195b5b91a127c" Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.589357 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" path="/var/lib/kubelet/pods/0cd73bd8-4034-44e9-b00a-75ea938360c8/volumes" Jan 23 09:14:15 crc kubenswrapper[4684]: I0123 09:14:15.590194 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6386382b-e651-4888-857e-a3a7325f1f14" path="/var/lib/kubelet/pods/6386382b-e651-4888-857e-a3a7325f1f14/volumes" Jan 23 09:14:16 crc kubenswrapper[4684]: I0123 09:14:16.171450 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcd6g"] Jan 23 09:14:16 crc kubenswrapper[4684]: I0123 09:14:16.171671 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hcd6g" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="registry-server" containerID="cri-o://314a784dfdec297108ed663b3e24d6ac32cd9ce71df0d9f686a8825dfe6a0738" gracePeriod=2 Jan 23 09:14:16 crc kubenswrapper[4684]: I0123 09:14:16.372533 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vnv8t"] Jan 23 09:14:16 crc kubenswrapper[4684]: I0123 09:14:16.373138 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vnv8t" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="registry-server" containerID="cri-o://126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7" gracePeriod=2 Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.515274 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vnv8t" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.551656 4684 generic.go:334] "Generic (PLEG): container finished" podID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerID="314a784dfdec297108ed663b3e24d6ac32cd9ce71df0d9f686a8825dfe6a0738" exitCode=0 Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.551761 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerDied","Data":"314a784dfdec297108ed663b3e24d6ac32cd9ce71df0d9f686a8825dfe6a0738"} Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.555686 4684 generic.go:334] "Generic (PLEG): container finished" podID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerID="126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7" exitCode=0 Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.555773 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerDied","Data":"126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7"} Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.555808 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vnv8t" event={"ID":"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226","Type":"ContainerDied","Data":"ed6afda4661386cdecb0954427f3d46f6d134aa9ea73909bb0066874a733c081"} Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.555830 4684 scope.go:117] "RemoveContainer" containerID="126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.555910 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vnv8t" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.588869 4684 scope.go:117] "RemoveContainer" containerID="d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.652752 4684 scope.go:117] "RemoveContainer" containerID="dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.666612 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcd6g" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.678511 4684 scope.go:117] "RemoveContainer" containerID="126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7" Jan 23 09:14:17 crc kubenswrapper[4684]: E0123 09:14:17.679122 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7\": container with ID starting with 126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7 not found: ID does not exist" containerID="126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.679159 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7"} err="failed to get container status \"126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7\": rpc error: code = NotFound desc = could not find container \"126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7\": container with ID starting with 126f8fca8120dc84338e5fc813f6a97fb061b4a68033708056ef4759b903aab7 not found: ID does not exist" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.679183 4684 scope.go:117] "RemoveContainer" containerID="d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8" Jan 23 09:14:17 crc kubenswrapper[4684]: E0123 09:14:17.679502 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8\": container with ID starting with d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8 not found: ID does not exist" containerID="d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.679537 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8"} err="failed to get container status \"d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8\": rpc error: code = NotFound desc = could not find container \"d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8\": container with ID starting with d090c7c7d777792af0ce7e82f8e7dc254cea89eea157b0c23551c9669b6d9aa8 not found: ID does not exist" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.679562 4684 scope.go:117] "RemoveContainer" containerID="dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900" Jan 23 09:14:17 crc kubenswrapper[4684]: E0123 09:14:17.684009 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900\": container with ID starting with dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900 not found: ID does not exist" containerID="dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.684040 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900"} err="failed to get container status \"dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900\": rpc error: code = 
NotFound desc = could not find container \"dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900\": container with ID starting with dacd5b2a5ab88c954b2fdd2de0d065f964f4be46436612b02cfc3dfbf18e3900 not found: ID does not exist" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.701319 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfgdl\" (UniqueName: \"kubernetes.io/projected/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-kube-api-access-vfgdl\") pod \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.701379 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-utilities\") pod \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.701411 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-catalog-content\") pod \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\" (UID: \"5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226\") " Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.705723 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-utilities" (OuterVolumeSpecName: "utilities") pod "5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" (UID: "5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.709808 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-kube-api-access-vfgdl" (OuterVolumeSpecName: "kube-api-access-vfgdl") pod "5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" (UID: "5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226"). InnerVolumeSpecName "kube-api-access-vfgdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.802461 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-utilities\") pod \"a32a23a8-fd38-4a01-bc87-e589889a39e6\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.802863 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tjv\" (UniqueName: \"kubernetes.io/projected/a32a23a8-fd38-4a01-bc87-e589889a39e6-kube-api-access-g5tjv\") pod \"a32a23a8-fd38-4a01-bc87-e589889a39e6\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.803068 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-catalog-content\") pod \"a32a23a8-fd38-4a01-bc87-e589889a39e6\" (UID: \"a32a23a8-fd38-4a01-bc87-e589889a39e6\") " Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.803234 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-utilities" (OuterVolumeSpecName: "utilities") pod "a32a23a8-fd38-4a01-bc87-e589889a39e6" (UID: "a32a23a8-fd38-4a01-bc87-e589889a39e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.803518 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.803892 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfgdl\" (UniqueName: \"kubernetes.io/projected/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-kube-api-access-vfgdl\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.803998 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.806882 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32a23a8-fd38-4a01-bc87-e589889a39e6-kube-api-access-g5tjv" (OuterVolumeSpecName: "kube-api-access-g5tjv") pod "a32a23a8-fd38-4a01-bc87-e589889a39e6" (UID: "a32a23a8-fd38-4a01-bc87-e589889a39e6"). InnerVolumeSpecName "kube-api-access-g5tjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.822904 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" (UID: "5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.823715 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a32a23a8-fd38-4a01-bc87-e589889a39e6" (UID: "a32a23a8-fd38-4a01-bc87-e589889a39e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.884380 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vnv8t"] Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.890026 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vnv8t"] Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.905235 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.905262 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32a23a8-fd38-4a01-bc87-e589889a39e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:17 crc kubenswrapper[4684]: I0123 09:14:17.905272 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tjv\" (UniqueName: \"kubernetes.io/projected/a32a23a8-fd38-4a01-bc87-e589889a39e6-kube-api-access-g5tjv\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.565626 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hcd6g" event={"ID":"a32a23a8-fd38-4a01-bc87-e589889a39e6","Type":"ContainerDied","Data":"6c512678b9d2d1b1ebee786cfdbc46a57fce5a9f38caa72f3cb3e62093dfb242"} Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.566052 4684 scope.go:117] "RemoveContainer" containerID="314a784dfdec297108ed663b3e24d6ac32cd9ce71df0d9f686a8825dfe6a0738" Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.565732 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hcd6g" Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.595834 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcd6g"] Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.599967 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hcd6g"] Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.601660 4684 scope.go:117] "RemoveContainer" containerID="cfa2ad3764d44551aa6bc6c6a7de1e285407c0fea2f82dac38fd64dee528a1ec" Jan 23 09:14:18 crc kubenswrapper[4684]: I0123 09:14:18.643857 4684 scope.go:117] "RemoveContainer" containerID="284121e3234b1751b9f0e90389bb58657d0ac7e24039d13a2fbbbd3f59e1e44f" Jan 23 09:14:19 crc kubenswrapper[4684]: I0123 09:14:19.589761 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" path="/var/lib/kubelet/pods/5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226/volumes" Jan 23 09:14:19 crc kubenswrapper[4684]: I0123 09:14:19.590536 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" path="/var/lib/kubelet/pods/a32a23a8-fd38-4a01-bc87-e589889a39e6/volumes" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.135626 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2mrs"] Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.136226 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc4kj"] Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.136464 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pc4kj" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="registry-server" containerID="cri-o://d5c14ba4360eb52e85d58b457e69c943fdceda28b5bf0c035c9f4ef3317f52f7" gracePeriod=30 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.136758 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x2mrs" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="registry-server" containerID="cri-o://7d0fd50bcb08fe29c47575a5ad2121e36eba72bc60c62f6728c33fdad33487b5" gracePeriod=30 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.144424 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfmsb"] Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.144630 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" containerID="cri-o://080069b9837351b3819630d5376f3ae1b2cacc3a63713a83f095675ad9ff66ce" gracePeriod=30 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.157453 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74vxp"] Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.157756 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74vxp" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="registry-server" containerID="cri-o://9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71" gracePeriod=30 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.181797 4684 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nnzz"] Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.182146 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nnzz" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="registry-server" containerID="cri-o://ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca" gracePeriod=30 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189169 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-25vv4"] Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189430 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189442 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189451 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189459 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189470 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189476 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189485 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189491 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189501 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189507 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189515 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189521 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189530 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189536 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189546 4684 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189552 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189562 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189568 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189579 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189584 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189593 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189599 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="extract-utilities" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189613 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189619 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.189627 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189633 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="extract-content" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189745 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6b0dac-56a9-4bc7-b6f1-fdbe9578f226" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189759 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189770 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd73bd8-4034-44e9-b00a-75ea938360c8" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189778 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6386382b-e651-4888-857e-a3a7325f1f14" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.189789 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32a23a8-fd38-4a01-bc87-e589889a39e6" containerName="registry-server" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.190256 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.219290 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-25vv4"] Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.344921 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxn2v\" (UniqueName: \"kubernetes.io/projected/9703bbe4-b658-40eb-b8db-14f18c684ab3-kube-api-access-xxn2v\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.345060 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9703bbe4-b658-40eb-b8db-14f18c684ab3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.345582 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9703bbe4-b658-40eb-b8db-14f18c684ab3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.447298 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxn2v\" (UniqueName: \"kubernetes.io/projected/9703bbe4-b658-40eb-b8db-14f18c684ab3-kube-api-access-xxn2v\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.447413 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9703bbe4-b658-40eb-b8db-14f18c684ab3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.447463 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9703bbe4-b658-40eb-b8db-14f18c684ab3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.449590 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9703bbe4-b658-40eb-b8db-14f18c684ab3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.464959 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/9703bbe4-b658-40eb-b8db-14f18c684ab3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.467268 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxn2v\" (UniqueName: \"kubernetes.io/projected/9703bbe4-b658-40eb-b8db-14f18c684ab3-kube-api-access-xxn2v\") pod \"marketplace-operator-79b997595-25vv4\" (UID: \"9703bbe4-b658-40eb-b8db-14f18c684ab3\") " pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.507260 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.677067 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.721052 4684 generic.go:334] "Generic (PLEG): container finished" podID="597fda0b-2292-4816-a498-539a84a87f33" containerID="9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71" exitCode=0 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.721117 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerDied","Data":"9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71"} Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.721153 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74vxp" event={"ID":"597fda0b-2292-4816-a498-539a84a87f33","Type":"ContainerDied","Data":"13a091f6b0321d9ec401bcc2522dc85c846fb0a496454a8da29c56211b52ad0d"} Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.721174 4684 scope.go:117] "RemoveContainer" containerID="9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.721302 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74vxp" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.742175 4684 generic.go:334] "Generic (PLEG): container finished" podID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerID="d5c14ba4360eb52e85d58b457e69c943fdceda28b5bf0c035c9f4ef3317f52f7" exitCode=0 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.742301 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerDied","Data":"d5c14ba4360eb52e85d58b457e69c943fdceda28b5bf0c035c9f4ef3317f52f7"} Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.752289 4684 generic.go:334] "Generic (PLEG): container finished" podID="703df6b3-b903-4818-b0c8-8681de1c6065" containerID="080069b9837351b3819630d5376f3ae1b2cacc3a63713a83f095675ad9ff66ce" exitCode=0 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.752381 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" event={"ID":"703df6b3-b903-4818-b0c8-8681de1c6065","Type":"ContainerDied","Data":"080069b9837351b3819630d5376f3ae1b2cacc3a63713a83f095675ad9ff66ce"} Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.753025 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.759888 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8jv6\" (UniqueName: \"kubernetes.io/projected/597fda0b-2292-4816-a498-539a84a87f33-kube-api-access-f8jv6\") pod \"597fda0b-2292-4816-a498-539a84a87f33\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.759958 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-trusted-ca\") pod \"703df6b3-b903-4818-b0c8-8681de1c6065\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.760026 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-operator-metrics\") pod \"703df6b3-b903-4818-b0c8-8681de1c6065\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.760055 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-catalog-content\") pod \"597fda0b-2292-4816-a498-539a84a87f33\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.760100 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmcbh\" (UniqueName: \"kubernetes.io/projected/703df6b3-b903-4818-b0c8-8681de1c6065-kube-api-access-nmcbh\") pod \"703df6b3-b903-4818-b0c8-8681de1c6065\" (UID: \"703df6b3-b903-4818-b0c8-8681de1c6065\") " Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.760135 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-utilities\") pod 
\"597fda0b-2292-4816-a498-539a84a87f33\" (UID: \"597fda0b-2292-4816-a498-539a84a87f33\") " Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.762002 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "703df6b3-b903-4818-b0c8-8681de1c6065" (UID: "703df6b3-b903-4818-b0c8-8681de1c6065"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.762150 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-utilities" (OuterVolumeSpecName: "utilities") pod "597fda0b-2292-4816-a498-539a84a87f33" (UID: "597fda0b-2292-4816-a498-539a84a87f33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.766353 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597fda0b-2292-4816-a498-539a84a87f33-kube-api-access-f8jv6" (OuterVolumeSpecName: "kube-api-access-f8jv6") pod "597fda0b-2292-4816-a498-539a84a87f33" (UID: "597fda0b-2292-4816-a498-539a84a87f33"). InnerVolumeSpecName "kube-api-access-f8jv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.766507 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703df6b3-b903-4818-b0c8-8681de1c6065-kube-api-access-nmcbh" (OuterVolumeSpecName: "kube-api-access-nmcbh") pod "703df6b3-b903-4818-b0c8-8681de1c6065" (UID: "703df6b3-b903-4818-b0c8-8681de1c6065"). InnerVolumeSpecName "kube-api-access-nmcbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.782918 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "703df6b3-b903-4818-b0c8-8681de1c6065" (UID: "703df6b3-b903-4818-b0c8-8681de1c6065"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.787756 4684 generic.go:334] "Generic (PLEG): container finished" podID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerID="7d0fd50bcb08fe29c47575a5ad2121e36eba72bc60c62f6728c33fdad33487b5" exitCode=0 Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.787798 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerDied","Data":"7d0fd50bcb08fe29c47575a5ad2121e36eba72bc60c62f6728c33fdad33487b5"} Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.817531 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "597fda0b-2292-4816-a498-539a84a87f33" (UID: "597fda0b-2292-4816-a498-539a84a87f33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.827042 4684 scope.go:117] "RemoveContainer" containerID="3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.847159 4684 scope.go:117] "RemoveContainer" containerID="6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.862582 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.862858 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.862992 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmcbh\" (UniqueName: \"kubernetes.io/projected/703df6b3-b903-4818-b0c8-8681de1c6065-kube-api-access-nmcbh\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.863091 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597fda0b-2292-4816-a498-539a84a87f33-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.863152 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8jv6\" (UniqueName: \"kubernetes.io/projected/597fda0b-2292-4816-a498-539a84a87f33-kube-api-access-f8jv6\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.863286 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/703df6b3-b903-4818-b0c8-8681de1c6065-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.888972 4684 scope.go:117] "RemoveContainer" containerID="9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.891378 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71\": container with ID starting with 9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71 not found: ID does not exist" containerID="9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.891528 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71"} err="failed to get container status \"9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71\": rpc error: code = NotFound desc = could not find container \"9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71\": container with ID starting with 9d553b8b9caf527dd5a57dff15285e93e7edc94de753fa041326a0b1e083cd71 not found: ID does not exist" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.891591 4684 scope.go:117] "RemoveContainer" containerID="3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 
09:14:41.892161 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890\": container with ID starting with 3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890 not found: ID does not exist" containerID="3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.892255 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890"} err="failed to get container status \"3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890\": rpc error: code = NotFound desc = could not find container \"3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890\": container with ID starting with 3933d73fa5e12f986261de632bc1fe99236ea7a78d4fc0bf77372ecb4a98b890 not found: ID does not exist" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.892302 4684 scope.go:117] "RemoveContainer" containerID="6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5" Jan 23 09:14:41 crc kubenswrapper[4684]: E0123 09:14:41.892745 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5\": container with ID starting with 6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5 not found: ID does not exist" containerID="6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.892832 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5"} err="failed to get container status \"6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5\": rpc error: code = NotFound desc = could not find container \"6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5\": container with ID starting with 6c523c49df4bd31c5a1a6578dd029cb4bd7f24aa003d53a87404f0f60e12a1a5 not found: ID does not exist" Jan 23 09:14:41 crc kubenswrapper[4684]: I0123 09:14:41.892855 4684 scope.go:117] "RemoveContainer" containerID="bf0e2db7f62363906898199e85bc114cf704a5ad24bf8db0ca11597b9b1db919" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.055878 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74vxp"] Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.064605 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74vxp"] Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.141492 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-25vv4"] Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.491314 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc4kj" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.570468 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v857v\" (UniqueName: \"kubernetes.io/projected/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-kube-api-access-v857v\") pod \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.570684 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-catalog-content\") pod \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.570782 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-utilities\") pod \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\" (UID: \"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151\") " Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.571546 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-utilities" (OuterVolumeSpecName: "utilities") pod "2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" (UID: "2f9880b0-14ae-4649-b7ba-6d0dd1ab5151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.590378 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-kube-api-access-v857v" (OuterVolumeSpecName: "kube-api-access-v857v") pod "2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" (UID: "2f9880b0-14ae-4649-b7ba-6d0dd1ab5151"). InnerVolumeSpecName "kube-api-access-v857v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.633866 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" (UID: "2f9880b0-14ae-4649-b7ba-6d0dd1ab5151"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.672475 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.672525 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.672535 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v857v\" (UniqueName: \"kubernetes.io/projected/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151-kube-api-access-v857v\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.798327 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc4kj" event={"ID":"2f9880b0-14ae-4649-b7ba-6d0dd1ab5151","Type":"ContainerDied","Data":"2cadccfba472a9129d21ba9328500650192be1557c8ea77badde77e57f6ea4dd"} Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.798384 4684 scope.go:117] "RemoveContainer" containerID="d5c14ba4360eb52e85d58b457e69c943fdceda28b5bf0c035c9f4ef3317f52f7" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.798503 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc4kj" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.829599 4684 scope.go:117] "RemoveContainer" containerID="2795dc8067cfcebc9f49052e239941770f9149a311a853d77fc9c33d333bb07d" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.831422 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.831660 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfmsb" event={"ID":"703df6b3-b903-4818-b0c8-8681de1c6065","Type":"ContainerDied","Data":"c38f54fca325f71ef8fa291d6bd120a9bf3abc611b72ed610b80b584badf9fc0"} Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.852753 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" event={"ID":"9703bbe4-b658-40eb-b8db-14f18c684ab3","Type":"ContainerStarted","Data":"6c467947753a5130cdd3c117901ea075024099b6c2b13c091fdefed5f437450b"} Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.857807 4684 scope.go:117] "RemoveContainer" containerID="9aa7a109bdedcefff1026559d43ae04050530da5d9493dfb559e06d896ee94c3" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.871043 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc4kj"] Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.878329 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pc4kj"] Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.901679 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfmsb"] Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.902655 4684 scope.go:117] "RemoveContainer" containerID="080069b9837351b3819630d5376f3ae1b2cacc3a63713a83f095675ad9ff66ce" Jan 23 09:14:42 crc kubenswrapper[4684]: I0123 09:14:42.947116 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfmsb"] Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.002005 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2mrs" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.186511 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b27ph"] Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.187200 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="extract-content" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.187301 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="extract-content" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.187368 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="extract-content" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.187423 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="extract-content" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.187487 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.187546 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.187615 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.187677 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.187783 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="extract-utilities" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.187852 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="extract-utilities" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.187920 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="extract-utilities" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.187976 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="extract-utilities" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.188032 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="extract-utilities" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188092 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="extract-utilities" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.188147 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188212 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.188293 4684 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188360 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.188440 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188516 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.188610 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="extract-content" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188689 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="extract-content" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188910 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.188994 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189090 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" containerName="marketplace-operator" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189198 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189298 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="597fda0b-2292-4816-a498-539a84a87f33" containerName="registry-server" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189030 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-utilities\") pod \"b97308cc-f7d2-4693-8990-76cbb4c9abff\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189548 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf2sj\" (UniqueName: \"kubernetes.io/projected/b97308cc-f7d2-4693-8990-76cbb4c9abff-kube-api-access-pf2sj\") pod \"b97308cc-f7d2-4693-8990-76cbb4c9abff\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189690 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-catalog-content\") pod \"b97308cc-f7d2-4693-8990-76cbb4c9abff\" (UID: \"b97308cc-f7d2-4693-8990-76cbb4c9abff\") " Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.189791 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-utilities" (OuterVolumeSpecName: "utilities") pod "b97308cc-f7d2-4693-8990-76cbb4c9abff" (UID: "b97308cc-f7d2-4693-8990-76cbb4c9abff"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.190187 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.190775 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.197082 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.201111 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97308cc-f7d2-4693-8990-76cbb4c9abff-kube-api-access-pf2sj" (OuterVolumeSpecName: "kube-api-access-pf2sj") pod "b97308cc-f7d2-4693-8990-76cbb4c9abff" (UID: "b97308cc-f7d2-4693-8990-76cbb4c9abff"). InnerVolumeSpecName "kube-api-access-pf2sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.215355 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b27ph"] Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.281651 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b97308cc-f7d2-4693-8990-76cbb4c9abff" (UID: "b97308cc-f7d2-4693-8990-76cbb4c9abff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.291164 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a4439f-bc6b-4367-be86-8aa563f0b50e-catalog-content\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.291225 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a4439f-bc6b-4367-be86-8aa563f0b50e-utilities\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.291246 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmjv\" (UniqueName: \"kubernetes.io/projected/a9a4439f-bc6b-4367-be86-8aa563f0b50e-kube-api-access-btmjv\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.291323 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97308cc-f7d2-4693-8990-76cbb4c9abff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.291337 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf2sj\" (UniqueName: 
\"kubernetes.io/projected/b97308cc-f7d2-4693-8990-76cbb4c9abff-kube-api-access-pf2sj\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.392893 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a4439f-bc6b-4367-be86-8aa563f0b50e-catalog-content\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.392965 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a4439f-bc6b-4367-be86-8aa563f0b50e-utilities\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.392990 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btmjv\" (UniqueName: \"kubernetes.io/projected/a9a4439f-bc6b-4367-be86-8aa563f0b50e-kube-api-access-btmjv\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.393848 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a4439f-bc6b-4367-be86-8aa563f0b50e-catalog-content\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.394117 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a4439f-bc6b-4367-be86-8aa563f0b50e-utilities\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.418898 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmjv\" (UniqueName: \"kubernetes.io/projected/a9a4439f-bc6b-4367-be86-8aa563f0b50e-kube-api-access-btmjv\") pod \"redhat-marketplace-b27ph\" (UID: \"a9a4439f-bc6b-4367-be86-8aa563f0b50e\") " pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.511325 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.592848 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f9880b0-14ae-4649-b7ba-6d0dd1ab5151" path="/var/lib/kubelet/pods/2f9880b0-14ae-4649-b7ba-6d0dd1ab5151/volumes" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.593642 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597fda0b-2292-4816-a498-539a84a87f33" path="/var/lib/kubelet/pods/597fda0b-2292-4816-a498-539a84a87f33/volumes" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.594651 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="703df6b3-b903-4818-b0c8-8681de1c6065" path="/var/lib/kubelet/pods/703df6b3-b903-4818-b0c8-8681de1c6065/volumes" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.637138 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nnzz" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.695422 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gk9c\" (UniqueName: \"kubernetes.io/projected/888f4644-d4e6-4334-8711-c552d0ef037a-kube-api-access-6gk9c\") pod \"888f4644-d4e6-4334-8711-c552d0ef037a\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.695470 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-utilities\") pod \"888f4644-d4e6-4334-8711-c552d0ef037a\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.695526 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-catalog-content\") pod \"888f4644-d4e6-4334-8711-c552d0ef037a\" (UID: \"888f4644-d4e6-4334-8711-c552d0ef037a\") " Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.698914 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-utilities" (OuterVolumeSpecName: "utilities") pod "888f4644-d4e6-4334-8711-c552d0ef037a" (UID: "888f4644-d4e6-4334-8711-c552d0ef037a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.699506 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888f4644-d4e6-4334-8711-c552d0ef037a-kube-api-access-6gk9c" (OuterVolumeSpecName: "kube-api-access-6gk9c") pod "888f4644-d4e6-4334-8711-c552d0ef037a" (UID: "888f4644-d4e6-4334-8711-c552d0ef037a"). InnerVolumeSpecName "kube-api-access-6gk9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.728649 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.728728 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:14:43 crc kubenswrapper[4684]: E0123 09:14:43.796900 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97308cc_f7d2_4693_8990_76cbb4c9abff.slice/crio-b3daf9fb2bd9bbd959b198cc9dba0ca809470f44c696cc6a38eade09391d9dd0\": RecentStats: unable to find data in memory cache]" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.800145 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gk9c\" (UniqueName: \"kubernetes.io/projected/888f4644-d4e6-4334-8711-c552d0ef037a-kube-api-access-6gk9c\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.800178 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.867155 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" event={"ID":"9703bbe4-b658-40eb-b8db-14f18c684ab3","Type":"ContainerStarted","Data":"0b2b7895c0cb59c91d29eec5fee4a8b0e5d3359baa6a37513eb0bd8ece6d9ee9"} Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.868039 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.873847 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.875274 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2mrs" event={"ID":"b97308cc-f7d2-4693-8990-76cbb4c9abff","Type":"ContainerDied","Data":"b3daf9fb2bd9bbd959b198cc9dba0ca809470f44c696cc6a38eade09391d9dd0"} Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.875332 4684 scope.go:117] "RemoveContainer" containerID="7d0fd50bcb08fe29c47575a5ad2121e36eba72bc60c62f6728c33fdad33487b5" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.875459 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2mrs" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.880658 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "888f4644-d4e6-4334-8711-c552d0ef037a" (UID: "888f4644-d4e6-4334-8711-c552d0ef037a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.884917 4684 generic.go:334] "Generic (PLEG): container finished" podID="888f4644-d4e6-4334-8711-c552d0ef037a" containerID="ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca" exitCode=0 Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.885000 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerDied","Data":"ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca"} Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.885034 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nnzz" event={"ID":"888f4644-d4e6-4334-8711-c552d0ef037a","Type":"ContainerDied","Data":"7a0429e517c619a52ee080046051172e4f048c3b7e3a6df818bf352bef79b571"} Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.885118 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nnzz" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.894831 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-25vv4" podStartSLOduration=2.894810315 podStartE2EDuration="2.894810315s" podCreationTimestamp="2026-01-23 09:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:14:43.888196135 +0000 UTC m=+456.511574796" watchObservedRunningTime="2026-01-23 09:14:43.894810315 +0000 UTC m=+456.518188856" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.900937 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888f4644-d4e6-4334-8711-c552d0ef037a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.911003 4684 scope.go:117] "RemoveContainer" containerID="06d322703213706612011807604b50100de632a8938a972d78bd8b80d55fff50" Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.917197 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2mrs"] Jan 23 09:14:43 crc kubenswrapper[4684]: I0123 09:14:43.920593 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x2mrs"] Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.123564 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nnzz"] Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.127416 4684 scope.go:117] "RemoveContainer" containerID="f8d713cb3c6dd62d1d1924fbda88c2164baa1d0bcc5e3c259042314d9890fd95" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.132556 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nnzz"] Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.165466 4684 scope.go:117] "RemoveContainer" containerID="ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.188866 4684 scope.go:117] "RemoveContainer" containerID="b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.193779 4684 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-b27ph"] Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.226091 4684 scope.go:117] "RemoveContainer" containerID="ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.281631 4684 scope.go:117] "RemoveContainer" containerID="ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca" Jan 23 09:14:44 crc kubenswrapper[4684]: E0123 09:14:44.282353 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca\": container with ID starting with ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca not found: ID does not exist" containerID="ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.282398 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca"} err="failed to get container status \"ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca\": rpc error: code = NotFound desc = could not find container \"ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca\": container with ID starting with ec4c7529e536b562c55fba62ad717583075f744d44ab896b738be8744d0e16ca not found: ID does not exist" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.282427 4684 scope.go:117] "RemoveContainer" containerID="b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f" Jan 23 09:14:44 crc kubenswrapper[4684]: E0123 09:14:44.282837 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f\": container with ID starting with b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f not found: ID does not exist" containerID="b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.282905 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f"} err="failed to get container status \"b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f\": rpc error: code = NotFound desc = could not find container \"b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f\": container with ID starting with b5d7d77b40dc4fa0e8a2a3fc914c5aac0bc55be1aefd4db81f8f63b6be5c5a0f not found: ID does not exist" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.282953 4684 scope.go:117] "RemoveContainer" containerID="ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8" Jan 23 09:14:44 crc kubenswrapper[4684]: E0123 09:14:44.283348 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8\": container with ID starting with ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8 not found: ID does not exist" containerID="ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.283383 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8"} err="failed to get container status \"ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8\": rpc error: code = NotFound desc = could not find container \"ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8\": container with ID starting with ac05fea6304567e0ccecf4cefb4a5030cb710bfc6febbd89c5f92d462402fda8 not found: ID does not exist" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.904172 4684 generic.go:334] "Generic (PLEG): container finished" podID="a9a4439f-bc6b-4367-be86-8aa563f0b50e" containerID="31974a9bd5ff4c098b243fdcee9fe6963ef4fa7b5453d3acd79c9b6e7a775384" exitCode=0 Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.904407 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b27ph" event={"ID":"a9a4439f-bc6b-4367-be86-8aa563f0b50e","Type":"ContainerDied","Data":"31974a9bd5ff4c098b243fdcee9fe6963ef4fa7b5453d3acd79c9b6e7a775384"} Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.904581 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b27ph" event={"ID":"a9a4439f-bc6b-4367-be86-8aa563f0b50e","Type":"ContainerStarted","Data":"8e15e6b2dbffc1da09cd55543bfa95e36ea4b46c7ad78d671f292ca9c0aa3195"} Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.991978 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7mvn"] Jan 23 09:14:44 crc kubenswrapper[4684]: E0123 09:14:44.992245 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="extract-utilities" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.992265 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="extract-utilities" Jan 23 09:14:44 crc kubenswrapper[4684]: E0123 09:14:44.992277 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="registry-server" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.992284 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="registry-server" Jan 23 09:14:44 crc kubenswrapper[4684]: E0123 09:14:44.992292 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="extract-content" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.992299 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="extract-content" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.992381 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" containerName="registry-server" Jan 23 09:14:44 crc kubenswrapper[4684]: I0123 09:14:44.993202 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.002466 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.060182 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7mvn"] Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.133788 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cf87d-0316-45f3-97f8-2808b497892f-catalog-content\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.133904 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvgkr\" (UniqueName: \"kubernetes.io/projected/2f0cf87d-0316-45f3-97f8-2808b497892f-kube-api-access-bvgkr\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.134082 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cf87d-0316-45f3-97f8-2808b497892f-utilities\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.235980 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cf87d-0316-45f3-97f8-2808b497892f-catalog-content\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.236056 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvgkr\" (UniqueName: \"kubernetes.io/projected/2f0cf87d-0316-45f3-97f8-2808b497892f-kube-api-access-bvgkr\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.236146 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cf87d-0316-45f3-97f8-2808b497892f-utilities\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.236687 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cf87d-0316-45f3-97f8-2808b497892f-utilities\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.236979 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cf87d-0316-45f3-97f8-2808b497892f-catalog-content\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " 
pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.265910 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvgkr\" (UniqueName: \"kubernetes.io/projected/2f0cf87d-0316-45f3-97f8-2808b497892f-kube-api-access-bvgkr\") pod \"redhat-operators-d7mvn\" (UID: \"2f0cf87d-0316-45f3-97f8-2808b497892f\") " pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.329367 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.618181 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888f4644-d4e6-4334-8711-c552d0ef037a" path="/var/lib/kubelet/pods/888f4644-d4e6-4334-8711-c552d0ef037a/volumes" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.621167 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97308cc-f7d2-4693-8990-76cbb4c9abff" path="/var/lib/kubelet/pods/b97308cc-f7d2-4693-8990-76cbb4c9abff/volumes" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.621959 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7mvn"] Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.622005 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dpg4"] Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.623641 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dpg4"] Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.624605 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.631617 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.742677 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf3fd39-d429-4b70-805a-095ada6f811a-utilities\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.742740 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf3fd39-d429-4b70-805a-095ada6f811a-catalog-content\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.742767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55q9r\" (UniqueName: \"kubernetes.io/projected/fdf3fd39-d429-4b70-805a-095ada6f811a-kube-api-access-55q9r\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.843949 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf3fd39-d429-4b70-805a-095ada6f811a-utilities\") pod 
\"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.844014 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf3fd39-d429-4b70-805a-095ada6f811a-catalog-content\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.844054 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55q9r\" (UniqueName: \"kubernetes.io/projected/fdf3fd39-d429-4b70-805a-095ada6f811a-kube-api-access-55q9r\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.844610 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdf3fd39-d429-4b70-805a-095ada6f811a-catalog-content\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.846580 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdf3fd39-d429-4b70-805a-095ada6f811a-utilities\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.868815 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55q9r\" (UniqueName: \"kubernetes.io/projected/fdf3fd39-d429-4b70-805a-095ada6f811a-kube-api-access-55q9r\") pod \"community-operators-6dpg4\" (UID: \"fdf3fd39-d429-4b70-805a-095ada6f811a\") " pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.917306 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b27ph" event={"ID":"a9a4439f-bc6b-4367-be86-8aa563f0b50e","Type":"ContainerStarted","Data":"63e074f91ac457a4af4f2b376492814b455d19668b59d1e1692c286887b31bd8"} Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.918815 4684 generic.go:334] "Generic (PLEG): container finished" podID="2f0cf87d-0316-45f3-97f8-2808b497892f" containerID="1aa689ab12ec5a8762fa21f36684d1375462787f66c351ccbdacffd51b6b94d8" exitCode=0 Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.918975 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7mvn" event={"ID":"2f0cf87d-0316-45f3-97f8-2808b497892f","Type":"ContainerDied","Data":"1aa689ab12ec5a8762fa21f36684d1375462787f66c351ccbdacffd51b6b94d8"} Jan 23 09:14:45 crc kubenswrapper[4684]: I0123 09:14:45.919113 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7mvn" event={"ID":"2f0cf87d-0316-45f3-97f8-2808b497892f","Type":"ContainerStarted","Data":"ce7e6e6a87487e1ba526e35b3fd98546c15b94d37c302e928fc9337f6f16c9c6"} Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.018781 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.505382 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dpg4"] Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.929480 4684 generic.go:334] "Generic (PLEG): container finished" podID="fdf3fd39-d429-4b70-805a-095ada6f811a" containerID="058fb5db7cf5587d98b17c14b8180a6f9e85b56dbebd42dc9654c20398a4d93e" exitCode=0 Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.929910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpg4" event={"ID":"fdf3fd39-d429-4b70-805a-095ada6f811a","Type":"ContainerDied","Data":"058fb5db7cf5587d98b17c14b8180a6f9e85b56dbebd42dc9654c20398a4d93e"} Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.929947 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpg4" event={"ID":"fdf3fd39-d429-4b70-805a-095ada6f811a","Type":"ContainerStarted","Data":"c0307b1ae229b28edcc1de2f8cfa6a60fecb67d3f9c153f55fae4ff62fa05983"} Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.935002 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7mvn" event={"ID":"2f0cf87d-0316-45f3-97f8-2808b497892f","Type":"ContainerStarted","Data":"c2fb402a29b8bc3966dec0606fafd0ffcb38a3a22a29e4ec6bbfd2aaf379b02c"} Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.941828 4684 generic.go:334] "Generic (PLEG): container finished" podID="a9a4439f-bc6b-4367-be86-8aa563f0b50e" containerID="63e074f91ac457a4af4f2b376492814b455d19668b59d1e1692c286887b31bd8" exitCode=0 Jan 23 09:14:46 crc kubenswrapper[4684]: I0123 09:14:46.941879 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b27ph" event={"ID":"a9a4439f-bc6b-4367-be86-8aa563f0b50e","Type":"ContainerDied","Data":"63e074f91ac457a4af4f2b376492814b455d19668b59d1e1692c286887b31bd8"} Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.393673 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcntf"] Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.394880 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.397367 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.410628 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcntf"] Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.568448 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjjf\" (UniqueName: \"kubernetes.io/projected/005d929c-6b2b-4644-bddb-c02aa19facfe-kube-api-access-tmjjf\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.568548 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d929c-6b2b-4644-bddb-c02aa19facfe-utilities\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.568590 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d929c-6b2b-4644-bddb-c02aa19facfe-catalog-content\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.672049 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjjf\" (UniqueName: \"kubernetes.io/projected/005d929c-6b2b-4644-bddb-c02aa19facfe-kube-api-access-tmjjf\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.674838 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d929c-6b2b-4644-bddb-c02aa19facfe-utilities\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.675232 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d929c-6b2b-4644-bddb-c02aa19facfe-catalog-content\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.675869 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d929c-6b2b-4644-bddb-c02aa19facfe-catalog-content\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.676644 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d929c-6b2b-4644-bddb-c02aa19facfe-utilities\") pod \"certified-operators-qcntf\" (UID: 
\"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.715229 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjjf\" (UniqueName: \"kubernetes.io/projected/005d929c-6b2b-4644-bddb-c02aa19facfe-kube-api-access-tmjjf\") pod \"certified-operators-qcntf\" (UID: \"005d929c-6b2b-4644-bddb-c02aa19facfe\") " pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.718750 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.964540 4684 generic.go:334] "Generic (PLEG): container finished" podID="2f0cf87d-0316-45f3-97f8-2808b497892f" containerID="c2fb402a29b8bc3966dec0606fafd0ffcb38a3a22a29e4ec6bbfd2aaf379b02c" exitCode=0 Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.965527 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7mvn" event={"ID":"2f0cf87d-0316-45f3-97f8-2808b497892f","Type":"ContainerDied","Data":"c2fb402a29b8bc3966dec0606fafd0ffcb38a3a22a29e4ec6bbfd2aaf379b02c"} Jan 23 09:14:47 crc kubenswrapper[4684]: I0123 09:14:47.979005 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpg4" event={"ID":"fdf3fd39-d429-4b70-805a-095ada6f811a","Type":"ContainerStarted","Data":"accb9bfa42e8d0fab8e5c3aec9261389d62217e9c6bcfb3d21ff82c50921f196"} Jan 23 09:14:48 crc kubenswrapper[4684]: I0123 09:14:48.216863 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcntf"] Jan 23 09:14:48 crc kubenswrapper[4684]: W0123 09:14:48.229080 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005d929c_6b2b_4644_bddb_c02aa19facfe.slice/crio-a3169805ab7fb4c92be91179dbb06a7fe8b66791f13ef6f6c3f60ffa25632bf0 WatchSource:0}: Error finding container a3169805ab7fb4c92be91179dbb06a7fe8b66791f13ef6f6c3f60ffa25632bf0: Status 404 returned error can't find the container with id a3169805ab7fb4c92be91179dbb06a7fe8b66791f13ef6f6c3f60ffa25632bf0 Jan 23 09:14:48 crc kubenswrapper[4684]: I0123 09:14:48.988588 4684 generic.go:334] "Generic (PLEG): container finished" podID="fdf3fd39-d429-4b70-805a-095ada6f811a" containerID="accb9bfa42e8d0fab8e5c3aec9261389d62217e9c6bcfb3d21ff82c50921f196" exitCode=0 Jan 23 09:14:48 crc kubenswrapper[4684]: I0123 09:14:48.988626 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpg4" event={"ID":"fdf3fd39-d429-4b70-805a-095ada6f811a","Type":"ContainerDied","Data":"accb9bfa42e8d0fab8e5c3aec9261389d62217e9c6bcfb3d21ff82c50921f196"} Jan 23 09:14:48 crc kubenswrapper[4684]: I0123 09:14:48.999588 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7mvn" event={"ID":"2f0cf87d-0316-45f3-97f8-2808b497892f","Type":"ContainerStarted","Data":"2b313480d347697f22d7ab03afe082620362a5764bdc15bbe26836d53f23140d"} Jan 23 09:14:49 crc kubenswrapper[4684]: I0123 09:14:49.005905 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b27ph" event={"ID":"a9a4439f-bc6b-4367-be86-8aa563f0b50e","Type":"ContainerStarted","Data":"564d22f737abd637c273040e4a7bd21908247cb2edb1bee947d388b4413c73c4"} Jan 23 09:14:49 crc 
kubenswrapper[4684]: I0123 09:14:49.008112 4684 generic.go:334] "Generic (PLEG): container finished" podID="005d929c-6b2b-4644-bddb-c02aa19facfe" containerID="fcada9d19423e27061bec8389041c0d57137b368c0be5ebf082e9f3cb5232190" exitCode=0
Jan 23 09:14:49 crc kubenswrapper[4684]: I0123 09:14:49.008162 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcntf" event={"ID":"005d929c-6b2b-4644-bddb-c02aa19facfe","Type":"ContainerDied","Data":"fcada9d19423e27061bec8389041c0d57137b368c0be5ebf082e9f3cb5232190"}
Jan 23 09:14:49 crc kubenswrapper[4684]: I0123 09:14:49.008184 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcntf" event={"ID":"005d929c-6b2b-4644-bddb-c02aa19facfe","Type":"ContainerStarted","Data":"a3169805ab7fb4c92be91179dbb06a7fe8b66791f13ef6f6c3f60ffa25632bf0"}
Jan 23 09:14:49 crc kubenswrapper[4684]: I0123 09:14:49.072535 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b27ph" podStartSLOduration=3.105733013 podStartE2EDuration="6.07251388s" podCreationTimestamp="2026-01-23 09:14:43 +0000 UTC" firstStartedPulling="2026-01-23 09:14:44.908205993 +0000 UTC m=+457.531584534" lastFinishedPulling="2026-01-23 09:14:47.87498686 +0000 UTC m=+460.498365401" observedRunningTime="2026-01-23 09:14:49.065737094 +0000 UTC m=+461.689115635" watchObservedRunningTime="2026-01-23 09:14:49.07251388 +0000 UTC m=+461.695892421"
Jan 23 09:14:49 crc kubenswrapper[4684]: I0123 09:14:49.085319 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7mvn" podStartSLOduration=2.476667036 podStartE2EDuration="5.085303249s" podCreationTimestamp="2026-01-23 09:14:44 +0000 UTC" firstStartedPulling="2026-01-23 09:14:45.920290563 +0000 UTC m=+458.543669114" lastFinishedPulling="2026-01-23 09:14:48.528926786 +0000 UTC m=+461.152305327" observedRunningTime="2026-01-23 09:14:49.084573078 +0000 UTC m=+461.707951619" watchObservedRunningTime="2026-01-23 09:14:49.085303249 +0000 UTC m=+461.708681790"
Jan 23 09:14:50 crc kubenswrapper[4684]: I0123 09:14:50.018141 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpg4" event={"ID":"fdf3fd39-d429-4b70-805a-095ada6f811a","Type":"ContainerStarted","Data":"93ecc5c148ee28d638b427f172fdc2da323756a29893de9ac1d9a309e5db1290"}
Jan 23 09:14:50 crc kubenswrapper[4684]: I0123 09:14:50.025263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcntf" event={"ID":"005d929c-6b2b-4644-bddb-c02aa19facfe","Type":"ContainerStarted","Data":"a52d050a446e2d2b4403a545d3958e1d374fa9715a82ecfc9509bc9ada7c6ecc"}
Jan 23 09:14:50 crc kubenswrapper[4684]: I0123 09:14:50.048231 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dpg4" podStartSLOduration=2.599864772 podStartE2EDuration="5.048215171s" podCreationTimestamp="2026-01-23 09:14:45 +0000 UTC" firstStartedPulling="2026-01-23 09:14:46.932389144 +0000 UTC m=+459.555767685" lastFinishedPulling="2026-01-23 09:14:49.380739543 +0000 UTC m=+462.004118084" observedRunningTime="2026-01-23 09:14:50.046951854 +0000 UTC m=+462.670330405" watchObservedRunningTime="2026-01-23 09:14:50.048215171 +0000 UTC m=+462.671593712"
Jan 23 09:14:51 crc kubenswrapper[4684]: I0123 09:14:51.033572 4684 generic.go:334] "Generic (PLEG): container finished" podID="005d929c-6b2b-4644-bddb-c02aa19facfe" containerID="a52d050a446e2d2b4403a545d3958e1d374fa9715a82ecfc9509bc9ada7c6ecc" exitCode=0
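[editor's note] The startup-latency entries satisfy a simple relation: podStartSLOduration = podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), with E2E measured from podCreationTimestamp to watchObservedRunningTime. Checking redhat-marketplace-b27ph: 6.07251388 − (47.87498686 − 44.908205993) = 3.105733013, exactly the logged value. A Go sketch recomputing it from the logged timestamps (the time layout is an assumption matching the printed format):

```go
// Minimal sketch: recomputing podStartSLOduration for redhat-marketplace-b27ph
// from the timestamps logged above. SLO duration = E2E duration minus the
// image-pull window, the relation these fields appear to satisfy.
package main

import (
	"fmt"
	"time"
)

// layout matches the timestamp format printed in the log lines above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-23 09:14:43 +0000 UTC")
	firstPull := mustParse("2026-01-23 09:14:44.908205993 +0000 UTC")
	lastPull := mustParse("2026-01-23 09:14:47.87498686 +0000 UTC")
	watched := mustParse("2026-01-23 09:14:49.07251388 +0000 UTC")

	e2e := watched.Sub(created)          // podStartE2EDuration = 6.07251388s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = 3.105733013s
	fmt.Println(e2e, slo)
}
```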
podID="005d929c-6b2b-4644-bddb-c02aa19facfe" containerID="a52d050a446e2d2b4403a545d3958e1d374fa9715a82ecfc9509bc9ada7c6ecc" exitCode=0 Jan 23 09:14:51 crc kubenswrapper[4684]: I0123 09:14:51.033668 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcntf" event={"ID":"005d929c-6b2b-4644-bddb-c02aa19facfe","Type":"ContainerDied","Data":"a52d050a446e2d2b4403a545d3958e1d374fa9715a82ecfc9509bc9ada7c6ecc"} Jan 23 09:14:53 crc kubenswrapper[4684]: I0123 09:14:53.065275 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcntf" event={"ID":"005d929c-6b2b-4644-bddb-c02aa19facfe","Type":"ContainerStarted","Data":"2df35b44d1fb4f8dfd8c1baea8676445e076857452e62235261d4f81b27ffb46"} Jan 23 09:14:53 crc kubenswrapper[4684]: I0123 09:14:53.087872 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcntf" podStartSLOduration=3.103694731 podStartE2EDuration="6.087855149s" podCreationTimestamp="2026-01-23 09:14:47 +0000 UTC" firstStartedPulling="2026-01-23 09:14:49.009663946 +0000 UTC m=+461.633042487" lastFinishedPulling="2026-01-23 09:14:51.993824354 +0000 UTC m=+464.617202905" observedRunningTime="2026-01-23 09:14:53.085851781 +0000 UTC m=+465.709230322" watchObservedRunningTime="2026-01-23 09:14:53.087855149 +0000 UTC m=+465.711233690" Jan 23 09:14:53 crc kubenswrapper[4684]: I0123 09:14:53.519819 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:53 crc kubenswrapper[4684]: I0123 09:14:53.519863 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:53 crc kubenswrapper[4684]: I0123 09:14:53.575144 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:54 crc kubenswrapper[4684]: I0123 09:14:54.111815 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b27ph" Jan 23 09:14:55 crc kubenswrapper[4684]: I0123 09:14:55.330128 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:55 crc kubenswrapper[4684]: I0123 09:14:55.330655 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:14:56 crc kubenswrapper[4684]: I0123 09:14:56.019350 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:56 crc kubenswrapper[4684]: I0123 09:14:56.019414 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:56 crc kubenswrapper[4684]: I0123 09:14:56.059848 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:56 crc kubenswrapper[4684]: I0123 09:14:56.123644 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dpg4" Jan 23 09:14:56 crc kubenswrapper[4684]: I0123 09:14:56.375977 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7mvn" podUID="2f0cf87d-0316-45f3-97f8-2808b497892f" containerName="registry-server" 
probeResult="failure" output=< Jan 23 09:14:56 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 09:14:56 crc kubenswrapper[4684]: > Jan 23 09:14:57 crc kubenswrapper[4684]: I0123 09:14:57.719781 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:57 crc kubenswrapper[4684]: I0123 09:14:57.719825 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:57 crc kubenswrapper[4684]: I0123 09:14:57.759169 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:14:58 crc kubenswrapper[4684]: I0123 09:14:58.153028 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcntf" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.172270 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57"] Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.173904 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.175599 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.175629 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.185000 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57"] Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.355115 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f47989d9-6d56-4b95-8678-6aa1e287dded-secret-volume\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.355182 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s567\" (UniqueName: \"kubernetes.io/projected/f47989d9-6d56-4b95-8678-6aa1e287dded-kube-api-access-6s567\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.355220 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f47989d9-6d56-4b95-8678-6aa1e287dded-config-volume\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.456955 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s567\" (UniqueName: \"kubernetes.io/projected/f47989d9-6d56-4b95-8678-6aa1e287dded-kube-api-access-6s567\") pod 
\"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.457052 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f47989d9-6d56-4b95-8678-6aa1e287dded-config-volume\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.457114 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f47989d9-6d56-4b95-8678-6aa1e287dded-secret-volume\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.458292 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f47989d9-6d56-4b95-8678-6aa1e287dded-config-volume\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.476348 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s567\" (UniqueName: \"kubernetes.io/projected/f47989d9-6d56-4b95-8678-6aa1e287dded-kube-api-access-6s567\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.477461 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f47989d9-6d56-4b95-8678-6aa1e287dded-secret-volume\") pod \"collect-profiles-29485995-jsj57\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.500464 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:00 crc kubenswrapper[4684]: I0123 09:15:00.930878 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57"] Jan 23 09:15:01 crc kubenswrapper[4684]: I0123 09:15:01.124816 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" event={"ID":"f47989d9-6d56-4b95-8678-6aa1e287dded","Type":"ContainerStarted","Data":"6761b6def5d0b08f3f560f9c2a9f6552b3923979b225338c60eb2c5489d7d31a"} Jan 23 09:15:02 crc kubenswrapper[4684]: I0123 09:15:02.132197 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" event={"ID":"f47989d9-6d56-4b95-8678-6aa1e287dded","Type":"ContainerStarted","Data":"94c4efdb39c91980e5f8ba1eb61eda93e2821a0ba403c3f60c2903df32e13b81"} Jan 23 09:15:04 crc kubenswrapper[4684]: I0123 09:15:04.147026 4684 generic.go:334] "Generic (PLEG): container finished" podID="f47989d9-6d56-4b95-8678-6aa1e287dded" containerID="94c4efdb39c91980e5f8ba1eb61eda93e2821a0ba403c3f60c2903df32e13b81" exitCode=0 Jan 23 09:15:04 crc kubenswrapper[4684]: I0123 09:15:04.147104 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" event={"ID":"f47989d9-6d56-4b95-8678-6aa1e287dded","Type":"ContainerDied","Data":"94c4efdb39c91980e5f8ba1eb61eda93e2821a0ba403c3f60c2903df32e13b81"} Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.384427 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.429842 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7mvn" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.433245 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.533819 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s567\" (UniqueName: \"kubernetes.io/projected/f47989d9-6d56-4b95-8678-6aa1e287dded-kube-api-access-6s567\") pod \"f47989d9-6d56-4b95-8678-6aa1e287dded\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.533928 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f47989d9-6d56-4b95-8678-6aa1e287dded-secret-volume\") pod \"f47989d9-6d56-4b95-8678-6aa1e287dded\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.534094 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f47989d9-6d56-4b95-8678-6aa1e287dded-config-volume\") pod \"f47989d9-6d56-4b95-8678-6aa1e287dded\" (UID: \"f47989d9-6d56-4b95-8678-6aa1e287dded\") " Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.534469 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47989d9-6d56-4b95-8678-6aa1e287dded-config-volume" (OuterVolumeSpecName: "config-volume") pod "f47989d9-6d56-4b95-8678-6aa1e287dded" (UID: "f47989d9-6d56-4b95-8678-6aa1e287dded"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.543939 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47989d9-6d56-4b95-8678-6aa1e287dded-kube-api-access-6s567" (OuterVolumeSpecName: "kube-api-access-6s567") pod "f47989d9-6d56-4b95-8678-6aa1e287dded" (UID: "f47989d9-6d56-4b95-8678-6aa1e287dded"). InnerVolumeSpecName "kube-api-access-6s567". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.544339 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47989d9-6d56-4b95-8678-6aa1e287dded-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f47989d9-6d56-4b95-8678-6aa1e287dded" (UID: "f47989d9-6d56-4b95-8678-6aa1e287dded"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.636090 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s567\" (UniqueName: \"kubernetes.io/projected/f47989d9-6d56-4b95-8678-6aa1e287dded-kube-api-access-6s567\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.636153 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f47989d9-6d56-4b95-8678-6aa1e287dded-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:05 crc kubenswrapper[4684]: I0123 09:15:05.636196 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f47989d9-6d56-4b95-8678-6aa1e287dded-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:06 crc kubenswrapper[4684]: I0123 09:15:06.164462 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" event={"ID":"f47989d9-6d56-4b95-8678-6aa1e287dded","Type":"ContainerDied","Data":"6761b6def5d0b08f3f560f9c2a9f6552b3923979b225338c60eb2c5489d7d31a"} Jan 23 09:15:06 crc kubenswrapper[4684]: I0123 09:15:06.164512 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6761b6def5d0b08f3f560f9c2a9f6552b3923979b225338c60eb2c5489d7d31a" Jan 23 09:15:06 crc kubenswrapper[4684]: I0123 09:15:06.164515 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57" Jan 23 09:15:13 crc kubenswrapper[4684]: I0123 09:15:13.729049 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:15:13 crc kubenswrapper[4684]: I0123 09:15:13.729586 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:15:13 crc kubenswrapper[4684]: I0123 09:15:13.729631 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:15:13 crc kubenswrapper[4684]: I0123 09:15:13.730199 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f60477adb3b4dbc421728a3db0033ffac18f45a46c7ebdec44ba3b981e2ba81"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:15:13 crc kubenswrapper[4684]: I0123 09:15:13.730249 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://4f60477adb3b4dbc421728a3db0033ffac18f45a46c7ebdec44ba3b981e2ba81" gracePeriod=600 Jan 23 09:15:15 crc kubenswrapper[4684]: I0123 09:15:15.214912 4684 generic.go:334] "Generic (PLEG): container finished" 
podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="4f60477adb3b4dbc421728a3db0033ffac18f45a46c7ebdec44ba3b981e2ba81" exitCode=0 Jan 23 09:15:15 crc kubenswrapper[4684]: I0123 09:15:15.214976 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"4f60477adb3b4dbc421728a3db0033ffac18f45a46c7ebdec44ba3b981e2ba81"} Jan 23 09:15:15 crc kubenswrapper[4684]: I0123 09:15:15.215208 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"391314463e133e14077e9453ef4f023ff6205f2c184fe7d603fab43c81064707"} Jan 23 09:15:15 crc kubenswrapper[4684]: I0123 09:15:15.215225 4684 scope.go:117] "RemoveContainer" containerID="c3d090a4ca15b818846dbd02be034a5029761509ea8671673795d0b2b15249c9" Jan 23 09:15:21 crc kubenswrapper[4684]: I0123 09:15:21.829504 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hqn97"] Jan 23 09:15:21 crc kubenswrapper[4684]: E0123 09:15:21.830311 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47989d9-6d56-4b95-8678-6aa1e287dded" containerName="collect-profiles" Jan 23 09:15:21 crc kubenswrapper[4684]: I0123 09:15:21.830329 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47989d9-6d56-4b95-8678-6aa1e287dded" containerName="collect-profiles" Jan 23 09:15:21 crc kubenswrapper[4684]: I0123 09:15:21.830445 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47989d9-6d56-4b95-8678-6aa1e287dded" containerName="collect-profiles" Jan 23 09:15:21 crc kubenswrapper[4684]: I0123 09:15:21.830985 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:21 crc kubenswrapper[4684]: I0123 09:15:21.851758 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hqn97"] Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.002835 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kxp\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-kube-api-access-g4kxp\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.002887 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.002918 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.002946 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-bound-sa-token\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.003103 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-registry-tls\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.003212 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-trusted-ca\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.003301 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-registry-certificates\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.003562 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.024715 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.105375 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-registry-tls\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.105438 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-trusted-ca\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.105463 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-registry-certificates\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.105515 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kxp\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-kube-api-access-g4kxp\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.106284 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.107300 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.107773 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-bound-sa-token\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.107220 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-registry-certificates\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.106491 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.107088 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-trusted-ca\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.112504 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-registry-tls\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.114498 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.124285 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-bound-sa-token\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.124919 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kxp\" (UniqueName: \"kubernetes.io/projected/23fd2c4e-c2f4-4daf-89e5-dc9df223f648-kube-api-access-g4kxp\") pod \"image-registry-66df7c8f76-hqn97\" (UID: \"23fd2c4e-c2f4-4daf-89e5-dc9df223f648\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.148865 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97"
Jan 23 09:15:22 crc kubenswrapper[4684]: I0123 09:15:22.554832 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hqn97"]
Jan 23 09:15:23 crc kubenswrapper[4684]: I0123 09:15:23.262865 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" event={"ID":"23fd2c4e-c2f4-4daf-89e5-dc9df223f648","Type":"ContainerStarted","Data":"628068ac7a53b62b1b4efb339f6628dfd69501945fcac9932062e7bec5dff073"}
Jan 23 09:15:23 crc kubenswrapper[4684]: I0123 09:15:23.263813 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" event={"ID":"23fd2c4e-c2f4-4daf-89e5-dc9df223f648","Type":"ContainerStarted","Data":"989c162bc49ea42908de14f69842efc76891c10f0fdf3564bcaf1fff2e95272e"}
Jan 23 09:15:23 crc kubenswrapper[4684]: I0123 09:15:23.263849 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97"
Jan 23 09:15:23 crc kubenswrapper[4684]: I0123 09:15:23.287761 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97" podStartSLOduration=2.287739384 podStartE2EDuration="2.287739384s" podCreationTimestamp="2026-01-23 09:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:15:23.282720159 +0000 UTC m=+495.906098700" watchObservedRunningTime="2026-01-23 09:15:23.287739384 +0000 UTC m=+495.911117925"
Jan 23 09:15:42 crc kubenswrapper[4684]: I0123 09:15:42.155923 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hqn97"
Jan 23 09:15:42 crc kubenswrapper[4684]: I0123 09:15:42.219101 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wn9b6"]
Jan 23 09:16:07 crc kubenswrapper[4684]: I0123 09:16:07.263442 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" podUID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" containerName="registry" containerID="cri-o://2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785" gracePeriod=30
Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.043263 4684 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-wn9b6 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.35:5000/healthz\": dial tcp 10.217.0.35:5000: connect: connection refused" start-of-body=
Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.043630 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" podUID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.35:5000/healthz\": dial tcp 10.217.0.35:5000: connect: connection refused"
Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.367199 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6"
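[editor's note] Note the sentinel pull timestamps in the image-registry startup entry: firstStartedPulling and lastFinishedPulling of 0001-01-01 00:00:00 +0000 UTC are Go's zero time.Time values, which here appears to mean no image pull was observed, so the pull window is zero and podStartSLOduration equals podStartE2EDuration (2.287739384s for both, as logged). A sketch of the zero-value check under that assumption:

```go
// Minimal sketch: the "0001-01-01 00:00:00 +0000 UTC" pull timestamps above
// are Go's zero time.Time. With no observed image pull, the pull window is
// zero and podStartSLOduration equals podStartE2EDuration, as logged.
package main

import (
	"fmt"
	"time"
)

func pullWindow(first, last time.Time) time.Duration {
	if first.IsZero() || last.IsZero() {
		return 0 // no pull observed; the image was already on the node
	}
	return last.Sub(first)
}

func main() {
	var first, last time.Time // zero values print as 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(first)
	fmt.Println(pullWindow(first, last))
}
```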
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459272 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459324 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-tls\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459354 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-certificates\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459455 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d94b705-3a9a-4cb2-87f1-b898ba859d79-ca-trust-extracted\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459474 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d94b705-3a9a-4cb2-87f1-b898ba859d79-installation-pull-secrets\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459500 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-bound-sa-token\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459552 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgptj\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.459813 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-trusted-ca\") pod \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\" (UID: \"4d94b705-3a9a-4cb2-87f1-b898ba859d79\") " Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.460717 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.461262 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.466263 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.469092 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.469435 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.469536 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj" (OuterVolumeSpecName: "kube-api-access-lgptj") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "kube-api-access-lgptj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.470401 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d94b705-3a9a-4cb2-87f1-b898ba859d79-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.480186 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d94b705-3a9a-4cb2-87f1-b898ba859d79-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4d94b705-3a9a-4cb2-87f1-b898ba859d79" (UID: "4d94b705-3a9a-4cb2-87f1-b898ba859d79"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.515565 4684 generic.go:334] "Generic (PLEG): container finished" podID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" containerID="2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785" exitCode=0 Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.515610 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" event={"ID":"4d94b705-3a9a-4cb2-87f1-b898ba859d79","Type":"ContainerDied","Data":"2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785"} Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.515640 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" event={"ID":"4d94b705-3a9a-4cb2-87f1-b898ba859d79","Type":"ContainerDied","Data":"8c4d6501dd1ed065829a396d4ef2969998745d3985084a988cf8b37d8872ec82"} Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.515659 4684 scope.go:117] "RemoveContainer" containerID="2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.515788 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wn9b6" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.546562 4684 scope.go:117] "RemoveContainer" containerID="2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785" Jan 23 09:16:08 crc kubenswrapper[4684]: E0123 09:16:08.548205 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785\": container with ID starting with 2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785 not found: ID does not exist" containerID="2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.548235 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785"} err="failed to get container status \"2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785\": rpc error: code = NotFound desc = could not find container \"2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785\": container with ID starting with 2d31b9150d13567eab4ba3d1e40978cc76326048fb23aec05169609805334785 not found: ID does not exist" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.552526 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wn9b6"] Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.573710 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wn9b6"] Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.574256 4684 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d94b705-3a9a-4cb2-87f1-b898ba859d79-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.574387 4684 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d94b705-3a9a-4cb2-87f1-b898ba859d79-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:08 crc 
kubenswrapper[4684]: I0123 09:16:08.574517 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.574624 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgptj\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-kube-api-access-lgptj\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.574715 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.574798 4684 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:08 crc kubenswrapper[4684]: I0123 09:16:08.574890 4684 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d94b705-3a9a-4cb2-87f1-b898ba859d79-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:09 crc kubenswrapper[4684]: I0123 09:16:09.589727 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" path="/var/lib/kubelet/pods/4d94b705-3a9a-4cb2-87f1-b898ba859d79/volumes" Jan 23 09:17:43 crc kubenswrapper[4684]: I0123 09:17:43.729272 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:17:43 crc kubenswrapper[4684]: I0123 09:17:43.729888 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:18:13 crc kubenswrapper[4684]: I0123 09:18:13.728801 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:18:13 crc kubenswrapper[4684]: I0123 09:18:13.729387 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:18:43 crc kubenswrapper[4684]: I0123 09:18:43.728365 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:18:43 crc kubenswrapper[4684]: I0123 09:18:43.729046 4684 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:18:43 crc kubenswrapper[4684]: I0123 09:18:43.729109 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:18:43 crc kubenswrapper[4684]: I0123 09:18:43.729837 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"391314463e133e14077e9453ef4f023ff6205f2c184fe7d603fab43c81064707"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:18:43 crc kubenswrapper[4684]: I0123 09:18:43.729901 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://391314463e133e14077e9453ef4f023ff6205f2c184fe7d603fab43c81064707" gracePeriod=600 Jan 23 09:18:44 crc kubenswrapper[4684]: I0123 09:18:44.401970 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="391314463e133e14077e9453ef4f023ff6205f2c184fe7d603fab43c81064707" exitCode=0 Jan 23 09:18:44 crc kubenswrapper[4684]: I0123 09:18:44.402025 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"391314463e133e14077e9453ef4f023ff6205f2c184fe7d603fab43c81064707"} Jan 23 09:18:44 crc kubenswrapper[4684]: I0123 09:18:44.402124 4684 scope.go:117] "RemoveContainer" containerID="4f60477adb3b4dbc421728a3db0033ffac18f45a46c7ebdec44ba3b981e2ba81" Jan 23 09:18:45 crc kubenswrapper[4684]: I0123 09:18:45.409873 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"6a54cd0e651571067c33ee3cd9f4af92e5f9d59906264f1f012e4be5834f6450"} Jan 23 09:19:53 crc kubenswrapper[4684]: I0123 09:19:53.618619 4684 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.364062 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl"] Jan 23 09:20:28 crc kubenswrapper[4684]: E0123 09:20:28.364690 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" containerName="registry" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.364716 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" containerName="registry" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.364815 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d94b705-3a9a-4cb2-87f1-b898ba859d79" containerName="registry" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.365179 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.367032 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.368773 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.369900 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wxh8w" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.378564 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl"] Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.387585 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-9kbld"] Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.388198 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9kbld" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.390023 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zxkjz" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.406779 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sfbw8"] Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.407420 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.411188 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zk592" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.430015 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9kbld"] Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.433418 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sfbw8"] Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.563358 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkpg\" (UniqueName: \"kubernetes.io/projected/f4c0acc8-e95c-4880-ad7b-eafc6422a713-kube-api-access-gzkpg\") pod \"cert-manager-cainjector-cf98fcc89-8p4gl\" (UID: \"f4c0acc8-e95c-4880-ad7b-eafc6422a713\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.563453 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvn96\" (UniqueName: \"kubernetes.io/projected/b61e14d8-17ad-4f3b-aa18-e0030a15c870-kube-api-access-tvn96\") pod \"cert-manager-webhook-687f57d79b-sfbw8\" (UID: \"b61e14d8-17ad-4f3b-aa18-e0030a15c870\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.563529 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5h2p\" (UniqueName: \"kubernetes.io/projected/05d3b6d9-c965-441d-a575-dd4d250c519b-kube-api-access-z5h2p\") pod \"cert-manager-858654f9db-9kbld\" (UID: \"05d3b6d9-c965-441d-a575-dd4d250c519b\") " pod="cert-manager/cert-manager-858654f9db-9kbld" Jan 23 09:20:28 crc 
kubenswrapper[4684]: I0123 09:20:28.664452 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvn96\" (UniqueName: \"kubernetes.io/projected/b61e14d8-17ad-4f3b-aa18-e0030a15c870-kube-api-access-tvn96\") pod \"cert-manager-webhook-687f57d79b-sfbw8\" (UID: \"b61e14d8-17ad-4f3b-aa18-e0030a15c870\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.664520 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5h2p\" (UniqueName: \"kubernetes.io/projected/05d3b6d9-c965-441d-a575-dd4d250c519b-kube-api-access-z5h2p\") pod \"cert-manager-858654f9db-9kbld\" (UID: \"05d3b6d9-c965-441d-a575-dd4d250c519b\") " pod="cert-manager/cert-manager-858654f9db-9kbld" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.664585 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkpg\" (UniqueName: \"kubernetes.io/projected/f4c0acc8-e95c-4880-ad7b-eafc6422a713-kube-api-access-gzkpg\") pod \"cert-manager-cainjector-cf98fcc89-8p4gl\" (UID: \"f4c0acc8-e95c-4880-ad7b-eafc6422a713\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.685937 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5h2p\" (UniqueName: \"kubernetes.io/projected/05d3b6d9-c965-441d-a575-dd4d250c519b-kube-api-access-z5h2p\") pod \"cert-manager-858654f9db-9kbld\" (UID: \"05d3b6d9-c965-441d-a575-dd4d250c519b\") " pod="cert-manager/cert-manager-858654f9db-9kbld" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.687938 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkpg\" (UniqueName: \"kubernetes.io/projected/f4c0acc8-e95c-4880-ad7b-eafc6422a713-kube-api-access-gzkpg\") pod \"cert-manager-cainjector-cf98fcc89-8p4gl\" (UID: \"f4c0acc8-e95c-4880-ad7b-eafc6422a713\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.695033 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvn96\" (UniqueName: \"kubernetes.io/projected/b61e14d8-17ad-4f3b-aa18-e0030a15c870-kube-api-access-tvn96\") pod \"cert-manager-webhook-687f57d79b-sfbw8\" (UID: \"b61e14d8-17ad-4f3b-aa18-e0030a15c870\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.702978 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9kbld" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.720591 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:20:28 crc kubenswrapper[4684]: I0123 09:20:28.980168 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" Jan 23 09:20:29 crc kubenswrapper[4684]: I0123 09:20:29.157834 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9kbld"] Jan 23 09:20:29 crc kubenswrapper[4684]: I0123 09:20:29.163068 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:20:29 crc kubenswrapper[4684]: I0123 09:20:29.165280 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sfbw8"] Jan 23 09:20:29 crc kubenswrapper[4684]: W0123 09:20:29.169575 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb61e14d8_17ad_4f3b_aa18_e0030a15c870.slice/crio-687f987ad74af855a28672c70f8b854043c056d685be544a2b0508616b147924 WatchSource:0}: Error finding container 687f987ad74af855a28672c70f8b854043c056d685be544a2b0508616b147924: Status 404 returned error can't find the container with id 687f987ad74af855a28672c70f8b854043c056d685be544a2b0508616b147924 Jan 23 09:20:29 crc kubenswrapper[4684]: I0123 09:20:29.450653 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl"] Jan 23 09:20:30 crc kubenswrapper[4684]: I0123 09:20:30.034106 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" event={"ID":"b61e14d8-17ad-4f3b-aa18-e0030a15c870","Type":"ContainerStarted","Data":"687f987ad74af855a28672c70f8b854043c056d685be544a2b0508616b147924"} Jan 23 09:20:30 crc kubenswrapper[4684]: I0123 09:20:30.035214 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9kbld" event={"ID":"05d3b6d9-c965-441d-a575-dd4d250c519b","Type":"ContainerStarted","Data":"7d6a7c3afb8a042c1333d719ba017ca8dd03f9a93b0eefc9016aa778c800a56a"} Jan 23 09:20:30 crc kubenswrapper[4684]: I0123 09:20:30.037323 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" event={"ID":"f4c0acc8-e95c-4880-ad7b-eafc6422a713","Type":"ContainerStarted","Data":"f2102f5bea5c9c3114a35cc37be163ce04e3a7d14589faedb5842308bde8fc02"} Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.553091 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nk7v5"] Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554104 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3" gracePeriod=30 Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554260 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="sbdb" containerID="cri-o://6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699" gracePeriod=30 Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554331 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="nbdb" containerID="cri-o://1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee" gracePeriod=30 Jan 
23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554323 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-acl-logging" containerID="cri-o://d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351" gracePeriod=30 Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554377 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="northd" containerID="cri-o://6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14" gracePeriod=30 Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554430 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-controller" containerID="cri-o://3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49" gracePeriod=30 Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.554439 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-node" containerID="cri-o://c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4" gracePeriod=30 Jan 23 09:20:37 crc kubenswrapper[4684]: I0123 09:20:37.603746 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" containerID="cri-o://8218cbc66b770be0ac1518a792ef1b287a309ea7d28374ac237fea5de79088e5" gracePeriod=30 Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.589690 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699 is running failed: container process not found" containerID="6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.589779 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee is running failed: container process not found" containerID="1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.591457 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699 is running failed: container process not found" containerID="6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.591446 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee is running failed: container process not found" containerID="1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.592090 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699 is running failed: container process not found" containerID="6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.592191 4684 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="sbdb" Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.592610 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee is running failed: container process not found" containerID="1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 23 09:20:38 crc kubenswrapper[4684]: E0123 09:20:38.592687 4684 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="nbdb" Jan 23 09:20:39 crc kubenswrapper[4684]: I0123 09:20:39.099167 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log" Jan 23 09:20:39 crc kubenswrapper[4684]: I0123 09:20:39.108164 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:39 crc kubenswrapper[4684]: I0123 09:20:39.113509 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351" exitCode=143 Jan 23 09:20:39 crc kubenswrapper[4684]: I0123 09:20:39.113562 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.119627 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/2.log" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.120577 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/1.log" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.120622 4684 generic.go:334] "Generic (PLEG): container finished" podID="ab0885cc-d621-4e36-9e37-1326848bd147" containerID="610ad7c3751dfca11e84d63256a09136a679cafc9de6642417b891d4b967f206" exitCode=2 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.120684 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerDied","Data":"610ad7c3751dfca11e84d63256a09136a679cafc9de6642417b891d4b967f206"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.120793 4684 scope.go:117] "RemoveContainer" containerID="7bc78adb5a12c736586e26f00e1e598d2404f62b6f15dbb005f241e1d5fddae3" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.121323 4684 scope.go:117] "RemoveContainer" containerID="610ad7c3751dfca11e84d63256a09136a679cafc9de6642417b891d4b967f206" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.125519 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.128078 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.128816 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-controller/0.log" Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129368 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="8218cbc66b770be0ac1518a792ef1b287a309ea7d28374ac237fea5de79088e5" exitCode=0 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129453 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"8218cbc66b770be0ac1518a792ef1b287a309ea7d28374ac237fea5de79088e5"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129519 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699" exitCode=0 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129535 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee" exitCode=0 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129544 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14" exitCode=0 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129553 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3" exitCode=0 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129615 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4" exitCode=0 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129628 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerID="3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49" exitCode=143 Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129518 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129825 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129843 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129854 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 
09:20:40.129892 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4"} Jan 23 09:20:40 crc kubenswrapper[4684]: I0123 09:20:40.129902 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49"} Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.138474 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.140803 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.141271 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-controller/0.log" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.141580 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" event={"ID":"5fd1b372-d164-4037-ae8e-cf634b1c4b41","Type":"ContainerDied","Data":"a0d453ba54f696f071b7b86eca3cf00c4656d73f0f7fd74a9a9302ecf012b310"} Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.141612 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d453ba54f696f071b7b86eca3cf00c4656d73f0f7fd74a9a9302ecf012b310" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.162632 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovnkube-controller/3.log" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.165153 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.165664 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-controller/0.log" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.166476 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.221500 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rwv7j"] Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.221877 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-node" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.221892 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-node" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.221933 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.221942 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.221955 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.221965 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.221975 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.221982 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222018 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222028 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222040 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kubecfg-setup" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222047 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kubecfg-setup" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222061 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="nbdb" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222068 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="nbdb" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222104 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="northd" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222113 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="northd" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222126 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222133 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222144 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="sbdb" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222151 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="sbdb" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.222186 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-acl-logging" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222408 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-acl-logging" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222605 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="northd" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222620 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222631 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222639 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222653 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222684 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovn-acl-logging" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222726 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="sbdb" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222737 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-node" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222750 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="nbdb" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.222776 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.230123 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.230149 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: E0123 09:20:41.230162 4684 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.230171 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.230310 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.230536 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" containerName="ovnkube-controller" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.234934 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327551 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-slash\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327603 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-ovn-kubernetes\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327643 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327651 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-slash" (OuterVolumeSpecName: "host-slash") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327713 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-script-lib\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327746 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-netns\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327775 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-netd\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327774 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327785 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327813 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-var-lib-openvswitch\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327840 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l46bg\" (UniqueName: \"kubernetes.io/projected/5fd1b372-d164-4037-ae8e-cf634b1c4b41-kube-api-access-l46bg\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327848 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327866 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-systemd-units\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327876 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327899 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-openvswitch\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327946 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-node-log\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327975 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-ovn\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327978 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328002 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovn-node-metrics-cert\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328028 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-config\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328050 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-etc-openvswitch\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328081 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-systemd\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328099 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-bin\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328119 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-log-socket\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328140 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-kubelet\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328169 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-env-overrides\") pod \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\" (UID: \"5fd1b372-d164-4037-ae8e-cf634b1c4b41\") " Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328328 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328356 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-slash\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328025 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-node-log" (OuterVolumeSpecName: "node-log") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328399 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-node-log\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328061 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328101 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328131 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328388 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328429 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328473 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-cni-bin\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328484 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328548 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328566 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjw9\" (UniqueName: \"kubernetes.io/projected/634af5a9-2de9-4115-91ca-108d2dc489ec-kube-api-access-2jjw9\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328588 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/634af5a9-2de9-4115-91ca-108d2dc489ec-ovn-node-metrics-cert\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328610 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-cni-netd\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328613 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-log-socket" (OuterVolumeSpecName: "log-socket") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328625 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-log-socket\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328635 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.327811 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328674 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328753 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-ovn\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328808 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328840 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-kubelet\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328867 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-systemd\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328881 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-etc-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328898 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-ovnkube-config\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.328943 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-run-netns\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329000 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-ovnkube-script-lib\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329028 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-env-overrides\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329048 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-systemd-units\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329062 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-var-lib-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329099 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329110 4684 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329120 4684 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 
09:20:41.329129 4684 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329138 4684 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329146 4684 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329154 4684 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329163 4684 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329174 4684 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329183 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329191 4684 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329198 4684 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329207 4684 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329215 4684 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329223 4684 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329233 4684 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.329240 
4684 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.335793 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.354892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd1b372-d164-4037-ae8e-cf634b1c4b41-kube-api-access-l46bg" (OuterVolumeSpecName: "kube-api-access-l46bg") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "kube-api-access-l46bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.361235 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5fd1b372-d164-4037-ae8e-cf634b1c4b41" (UID: "5fd1b372-d164-4037-ae8e-cf634b1c4b41"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430530 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430588 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-slash\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430617 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-node-log\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430642 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430668 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-cni-bin\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430667 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-slash\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430712 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jjw9\" (UniqueName: \"kubernetes.io/projected/634af5a9-2de9-4115-91ca-108d2dc489ec-kube-api-access-2jjw9\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430723 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-node-log\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430740 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/634af5a9-2de9-4115-91ca-108d2dc489ec-ovn-node-metrics-cert\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431152 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-cni-netd\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-log-socket\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430669 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430747 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431221 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-cni-netd\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.430773 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-cni-bin\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431385 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-log-socket\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431475 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-ovn\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431593 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431517 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-ovn\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431622 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-kubelet\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431669 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431732 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-systemd\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431772 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-run-systemd\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431773 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-etc-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431847 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-ovnkube-config\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431876 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-run-netns\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431742 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-kubelet\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431798 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-etc-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.431948 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-ovnkube-script-lib\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432061 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-host-run-netns\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432570 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-ovnkube-script-lib\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432584 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-ovnkube-config\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432611 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-env-overrides\") pod \"ovnkube-node-rwv7j\" (UID: 
\"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432645 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-systemd-units\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432663 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-var-lib-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432734 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd1b372-d164-4037-ae8e-cf634b1c4b41-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432764 4684 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd1b372-d164-4037-ae8e-cf634b1c4b41-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432775 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l46bg\" (UniqueName: \"kubernetes.io/projected/5fd1b372-d164-4037-ae8e-cf634b1c4b41-kube-api-access-l46bg\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.432803 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-var-lib-openvswitch\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.433105 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/634af5a9-2de9-4115-91ca-108d2dc489ec-env-overrides\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.433140 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/634af5a9-2de9-4115-91ca-108d2dc489ec-systemd-units\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.439944 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/634af5a9-2de9-4115-91ca-108d2dc489ec-ovn-node-metrics-cert\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.451050 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jjw9\" (UniqueName: \"kubernetes.io/projected/634af5a9-2de9-4115-91ca-108d2dc489ec-kube-api-access-2jjw9\") pod \"ovnkube-node-rwv7j\" (UID: \"634af5a9-2de9-4115-91ca-108d2dc489ec\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:41 crc kubenswrapper[4684]: I0123 09:20:41.551456 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:20:42 crc kubenswrapper[4684]: I0123 09:20:42.145762 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nk7v5" Jan 23 09:20:42 crc kubenswrapper[4684]: I0123 09:20:42.164906 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nk7v5"] Jan 23 09:20:42 crc kubenswrapper[4684]: I0123 09:20:42.168803 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nk7v5"] Jan 23 09:20:42 crc kubenswrapper[4684]: I0123 09:20:42.683201 4684 scope.go:117] "RemoveContainer" containerID="4982abf5ece76335ecf3d32af453818177712b3e256640b9bebec20436b73eb7" Jan 23 09:20:42 crc kubenswrapper[4684]: W0123 09:20:42.710923 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod634af5a9_2de9_4115_91ca_108d2dc489ec.slice/crio-434200d2eab0c0af4a4dbd97f8904a5db3424e37a5e8c3fa06f2065c21baa4a1 WatchSource:0}: Error finding container 434200d2eab0c0af4a4dbd97f8904a5db3424e37a5e8c3fa06f2065c21baa4a1: Status 404 returned error can't find the container with id 434200d2eab0c0af4a4dbd97f8904a5db3424e37a5e8c3fa06f2065c21baa4a1 Jan 23 09:20:43 crc kubenswrapper[4684]: I0123 09:20:43.158841 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:43 crc kubenswrapper[4684]: I0123 09:20:43.159489 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-controller/0.log" Jan 23 09:20:43 crc kubenswrapper[4684]: I0123 09:20:43.161390 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"434200d2eab0c0af4a4dbd97f8904a5db3424e37a5e8c3fa06f2065c21baa4a1"} Jan 23 09:20:43 crc kubenswrapper[4684]: I0123 09:20:43.164160 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/2.log" Jan 23 09:20:43 crc kubenswrapper[4684]: I0123 09:20:43.595459 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd1b372-d164-4037-ae8e-cf634b1c4b41" path="/var/lib/kubelet/pods/5fd1b372-d164-4037-ae8e-cf634b1c4b41/volumes" Jan 23 09:20:44 crc kubenswrapper[4684]: I0123 09:20:44.170520 4684 generic.go:334] "Generic (PLEG): container finished" podID="634af5a9-2de9-4115-91ca-108d2dc489ec" containerID="7165b04fa487cd9240a5dc3a5c3f9b1a1fe72bad5a38e195f417df97917c5618" exitCode=0 Jan 23 09:20:44 crc kubenswrapper[4684]: I0123 09:20:44.170595 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerDied","Data":"7165b04fa487cd9240a5dc3a5c3f9b1a1fe72bad5a38e195f417df97917c5618"} Jan 23 09:20:44 crc kubenswrapper[4684]: I0123 09:20:44.173990 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jwr4q_ab0885cc-d621-4e36-9e37-1326848bd147/kube-multus/2.log" Jan 23 09:20:44 crc 
kubenswrapper[4684]: I0123 09:20:44.174045 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jwr4q" event={"ID":"ab0885cc-d621-4e36-9e37-1326848bd147","Type":"ContainerStarted","Data":"41762d0b612fad7494eaa6025cd386f20eebcd0399f11b90a04e2b2da603b0f2"} Jan 23 09:20:45 crc kubenswrapper[4684]: I0123 09:20:45.181019 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"13f4a8e2aa1f5b434615855428a961dd0e15c206b8aed560aefb1ba898955705"} Jan 23 09:20:45 crc kubenswrapper[4684]: I0123 09:20:45.244169 4684 scope.go:117] "RemoveContainer" containerID="1d7d0cedb437ec48e365912b092c7f28a30e01fbab86c49bce1b26734ab264ee" Jan 23 09:20:46 crc kubenswrapper[4684]: I0123 09:20:46.234671 4684 scope.go:117] "RemoveContainer" containerID="6cfc04b44ac724b5e32e0102b3f0d670fdd7f2b7ae9b40266065c7b8192b228e" Jan 23 09:20:47 crc kubenswrapper[4684]: I0123 09:20:47.199200 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:47 crc kubenswrapper[4684]: I0123 09:20:47.199988 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-controller/0.log" Jan 23 09:20:47 crc kubenswrapper[4684]: I0123 09:20:47.605995 4684 scope.go:117] "RemoveContainer" containerID="8218cbc66b770be0ac1518a792ef1b287a309ea7d28374ac237fea5de79088e5" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.177723 4684 scope.go:117] "RemoveContainer" containerID="6ab83043e744c91535278153a247d7ba2b3612b867edbabf3a43192b51304e14" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.207948 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-acl-logging/0.log" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.208330 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nk7v5_5fd1b372-d164-4037-ae8e-cf634b1c4b41/ovn-controller/0.log" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.223522 4684 scope.go:117] "RemoveContainer" containerID="c845b6b78d55b23f70032599e19fb345571b02ca00353315bb08e94c834330d4" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.241395 4684 scope.go:117] "RemoveContainer" containerID="d44f8256ce0d8ea5237e13fb4f6d7ee5cd698c2821613b48d73ba903d2ab5351" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.268040 4684 scope.go:117] "RemoveContainer" containerID="6eab0113b2445bd23a5d3eb5f4bd79d26dd3352a1bf807cf7e770d55db85b699" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.284449 4684 scope.go:117] "RemoveContainer" containerID="3eab81e73847c2d5a8a24bd2be84c8ed97ecc482fe023474b519ae6bcf3e6e49" Jan 23 09:20:48 crc kubenswrapper[4684]: I0123 09:20:48.297524 4684 scope.go:117] "RemoveContainer" containerID="5ecd3493767226c89a1f3e3dff04d36ff5c47117c6ad2712e71633f5c6e375b3" Jan 23 09:20:49 crc kubenswrapper[4684]: I0123 09:20:49.216116 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"3f4bb78c62a73cf6063475dbd2a84afe1e840ca0243d97702df6c2875e488a0d"} Jan 23 09:20:50 crc kubenswrapper[4684]: I0123 09:20:50.223647 4684 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"8e193772f6f190174a1309277b846b9e7b1298931343cf4737072745ca9bbd85"} Jan 23 09:20:53 crc kubenswrapper[4684]: I0123 09:20:53.243282 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"97d47232c26e6c6f4a50c090e8a53fd811bca73954ecd08476b250323f46d5f0"} Jan 23 09:20:53 crc kubenswrapper[4684]: I0123 09:20:53.243592 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"441c4c45caaa57804d521ee7d69366681303b547c1b5b3056eca86ea549f1e1f"} Jan 23 09:21:03 crc kubenswrapper[4684]: I0123 09:21:03.299915 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"aa801bd8210c9d06ab2cc49e4b7ce9fc9ed95842c399da4099b997b3cdeac385"} Jan 23 09:21:06 crc kubenswrapper[4684]: I0123 09:21:06.321392 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"5515919fd3b328d66629a22643cfbf029dc182f9d1914651184ba99438bedea5"} Jan 23 09:21:06 crc kubenswrapper[4684]: E0123 09:21:06.427401 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/jetstack/cert-manager-controller:v1.19.2" Jan 23 09:21:06 crc kubenswrapper[4684]: E0123 09:21:06.427615 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-controller,Image:quay.io/jetstack/cert-manager-controller:v1.19.2,Command:[],Args:[--v=2 --cluster-resource-namespace=$(POD_NAMESPACE) --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.19.2 --max-concurrent-challenges=60],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},ContainerPort{Name:http-healthz,HostPort:0,ContainerPort:9403,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5h2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 
http-healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-858654f9db-9kbld_cert-manager(05d3b6d9-c965-441d-a575-dd4d250c519b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 09:21:06 crc kubenswrapper[4684]: E0123 09:21:06.428793 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-858654f9db-9kbld" podUID="05d3b6d9-c965-441d-a575-dd4d250c519b" Jan 23 09:21:06 crc kubenswrapper[4684]: E0123 09:21:06.537050 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="quay.io/jetstack/cert-manager-cainjector:v1.19.2" Jan 23 09:21:06 crc kubenswrapper[4684]: E0123 09:21:06.537275 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-cainjector,Image:quay.io/jetstack/cert-manager-cainjector:v1.19.2,Command:[],Args:[--v=2 --leader-election-namespace=kube-system],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzkpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-cf98fcc89-8p4gl_cert-manager(f4c0acc8-e95c-4880-ad7b-eafc6422a713): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: 
context canceled" logger="UnhandledError" Jan 23 09:21:06 crc kubenswrapper[4684]: E0123 09:21:06.538502 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" podUID="f4c0acc8-e95c-4880-ad7b-eafc6422a713" Jan 23 09:21:07 crc kubenswrapper[4684]: I0123 09:21:07.326792 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" event={"ID":"b61e14d8-17ad-4f3b-aa18-e0030a15c870","Type":"ContainerStarted","Data":"7ff312af09d22dc2c6f9d26b16e1929db37a6c3a6f255d0ce548d3a318ae0873"} Jan 23 09:21:07 crc kubenswrapper[4684]: I0123 09:21:07.326946 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:21:07 crc kubenswrapper[4684]: E0123 09:21:07.328523 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/jetstack/cert-manager-controller:v1.19.2\\\"\"" pod="cert-manager/cert-manager-858654f9db-9kbld" podUID="05d3b6d9-c965-441d-a575-dd4d250c519b" Jan 23 09:21:07 crc kubenswrapper[4684]: E0123 09:21:07.328523 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/jetstack/cert-manager-cainjector:v1.19.2\\\"\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" podUID="f4c0acc8-e95c-4880-ad7b-eafc6422a713" Jan 23 09:21:07 crc kubenswrapper[4684]: I0123 09:21:07.370930 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" podStartSLOduration=1.972305902 podStartE2EDuration="39.370911662s" podCreationTimestamp="2026-01-23 09:20:28 +0000 UTC" firstStartedPulling="2026-01-23 09:20:29.170745794 +0000 UTC m=+801.794124335" lastFinishedPulling="2026-01-23 09:21:06.569351554 +0000 UTC m=+839.192730095" observedRunningTime="2026-01-23 09:21:07.367919664 +0000 UTC m=+839.991298215" watchObservedRunningTime="2026-01-23 09:21:07.370911662 +0000 UTC m=+839.994290203" Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.354630 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" event={"ID":"634af5a9-2de9-4115-91ca-108d2dc489ec","Type":"ContainerStarted","Data":"655f4888111f7aab45072336525414d37138422332be8f766779d5ef23c241db"} Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.355658 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.355680 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.355694 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.386931 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" podStartSLOduration=28.386915032 podStartE2EDuration="28.386915032s" podCreationTimestamp="2026-01-23 
09:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:21:09.38516379 +0000 UTC m=+842.008542331" watchObservedRunningTime="2026-01-23 09:21:09.386915032 +0000 UTC m=+842.010293573" Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.390620 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:21:09 crc kubenswrapper[4684]: I0123 09:21:09.391593 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:21:13 crc kubenswrapper[4684]: I0123 09:21:13.723185 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-sfbw8" Jan 23 09:21:13 crc kubenswrapper[4684]: I0123 09:21:13.729144 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:21:13 crc kubenswrapper[4684]: I0123 09:21:13.729196 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:21:29 crc kubenswrapper[4684]: I0123 09:21:29.478527 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9kbld" event={"ID":"05d3b6d9-c965-441d-a575-dd4d250c519b","Type":"ContainerStarted","Data":"533791213bae4d759b515a150808270a609f5c9217582a908e6af54e091b25b2"} Jan 23 09:21:29 crc kubenswrapper[4684]: I0123 09:21:29.507956 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-9kbld" podStartSLOduration=2.387128353 podStartE2EDuration="1m1.507933713s" podCreationTimestamp="2026-01-23 09:20:28 +0000 UTC" firstStartedPulling="2026-01-23 09:20:29.162880092 +0000 UTC m=+801.786258633" lastFinishedPulling="2026-01-23 09:21:28.283685452 +0000 UTC m=+860.907063993" observedRunningTime="2026-01-23 09:21:29.503826792 +0000 UTC m=+862.127205333" watchObservedRunningTime="2026-01-23 09:21:29.507933713 +0000 UTC m=+862.131312254" Jan 23 09:21:31 crc kubenswrapper[4684]: I0123 09:21:31.494549 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" event={"ID":"f4c0acc8-e95c-4880-ad7b-eafc6422a713","Type":"ContainerStarted","Data":"8a9b602ecdfba40d510d52c87299a7ae5c98dbf6fad5b364a281b02af653cbaa"} Jan 23 09:21:31 crc kubenswrapper[4684]: I0123 09:21:31.513387 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8p4gl" podStartSLOduration=2.324320414 podStartE2EDuration="1m3.513369192s" podCreationTimestamp="2026-01-23 09:20:28 +0000 UTC" firstStartedPulling="2026-01-23 09:20:29.460517822 +0000 UTC m=+802.083896363" lastFinishedPulling="2026-01-23 09:21:30.6495666 +0000 UTC m=+863.272945141" observedRunningTime="2026-01-23 09:21:31.509809257 +0000 UTC m=+864.133187818" watchObservedRunningTime="2026-01-23 09:21:31.513369192 +0000 UTC m=+864.136747733" Jan 23 09:21:41 crc kubenswrapper[4684]: 
I0123 09:21:41.579613 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwv7j" Jan 23 09:21:43 crc kubenswrapper[4684]: I0123 09:21:43.729175 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:21:43 crc kubenswrapper[4684]: I0123 09:21:43.729246 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:22:13 crc kubenswrapper[4684]: I0123 09:22:13.728507 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:22:13 crc kubenswrapper[4684]: I0123 09:22:13.729349 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:22:13 crc kubenswrapper[4684]: I0123 09:22:13.729404 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:22:13 crc kubenswrapper[4684]: I0123 09:22:13.730204 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a54cd0e651571067c33ee3cd9f4af92e5f9d59906264f1f012e4be5834f6450"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:22:13 crc kubenswrapper[4684]: I0123 09:22:13.730268 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://6a54cd0e651571067c33ee3cd9f4af92e5f9d59906264f1f012e4be5834f6450" gracePeriod=600 Jan 23 09:22:14 crc kubenswrapper[4684]: I0123 09:22:14.811351 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="6a54cd0e651571067c33ee3cd9f4af92e5f9d59906264f1f012e4be5834f6450" exitCode=0 Jan 23 09:22:14 crc kubenswrapper[4684]: I0123 09:22:14.811857 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"6a54cd0e651571067c33ee3cd9f4af92e5f9d59906264f1f012e4be5834f6450"} Jan 23 09:22:14 crc kubenswrapper[4684]: I0123 09:22:14.811884 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" 
event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"d189a4bad8ef4c719b144352564a4f1767ae642d4e80c3912415bf811a82f8e8"} Jan 23 09:22:14 crc kubenswrapper[4684]: I0123 09:22:14.811900 4684 scope.go:117] "RemoveContainer" containerID="391314463e133e14077e9453ef4f023ff6205f2c184fe7d603fab43c81064707" Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.838507 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg"] Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.840182 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.843545 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.852390 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg"] Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.978467 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4jrr\" (UniqueName: \"kubernetes.io/projected/169c6832-37df-469f-9ff3-c0775456568a-kube-api-access-t4jrr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.978865 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:20 crc kubenswrapper[4684]: I0123 09:22:20.978894 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.080601 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4jrr\" (UniqueName: \"kubernetes.io/projected/169c6832-37df-469f-9ff3-c0775456568a-kube-api-access-t4jrr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.080671 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 
23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.080707 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.081213 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.081232 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.102305 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4jrr\" (UniqueName: \"kubernetes.io/projected/169c6832-37df-469f-9ff3-c0775456568a-kube-api-access-t4jrr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.159521 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.360547 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg"] Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.856213 4684 generic.go:334] "Generic (PLEG): container finished" podID="169c6832-37df-469f-9ff3-c0775456568a" containerID="775a1756fcaaed884c5ebb04a136744225b23bb4371307a9c311bd9b7d6d7f61" exitCode=0 Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.856263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" event={"ID":"169c6832-37df-469f-9ff3-c0775456568a","Type":"ContainerDied","Data":"775a1756fcaaed884c5ebb04a136744225b23bb4371307a9c311bd9b7d6d7f61"} Jan 23 09:22:21 crc kubenswrapper[4684]: I0123 09:22:21.856315 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" event={"ID":"169c6832-37df-469f-9ff3-c0775456568a","Type":"ContainerStarted","Data":"76fdc12c3e352c03fb97ac6e6daa6eae1f2d5406468bd0d40b9d0f3e895bec06"} Jan 23 09:22:22 crc kubenswrapper[4684]: I0123 09:22:22.915511 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7sgb"] Jan 23 09:22:22 crc kubenswrapper[4684]: I0123 09:22:22.917142 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:22 crc kubenswrapper[4684]: I0123 09:22:22.934567 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7sgb"] Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.006525 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-catalog-content\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.006557 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-utilities\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.006748 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpx8\" (UniqueName: \"kubernetes.io/projected/49af2e7d-f2c3-4fff-8e1f-659c9334154c-kube-api-access-srpx8\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.108375 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpx8\" (UniqueName: \"kubernetes.io/projected/49af2e7d-f2c3-4fff-8e1f-659c9334154c-kube-api-access-srpx8\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.108504 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-catalog-content\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.108537 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-utilities\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.109095 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-catalog-content\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.109111 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-utilities\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.129489 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-srpx8\" (UniqueName: \"kubernetes.io/projected/49af2e7d-f2c3-4fff-8e1f-659c9334154c-kube-api-access-srpx8\") pod \"redhat-operators-k7sgb\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") " pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.232381 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.735852 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7sgb"] Jan 23 09:22:23 crc kubenswrapper[4684]: W0123 09:22:23.736689 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49af2e7d_f2c3_4fff_8e1f_659c9334154c.slice/crio-76395c8689115f72827b6370a9a92e1054ea31f23ce32123e8c86f3e211bebc9 WatchSource:0}: Error finding container 76395c8689115f72827b6370a9a92e1054ea31f23ce32123e8c86f3e211bebc9: Status 404 returned error can't find the container with id 76395c8689115f72827b6370a9a92e1054ea31f23ce32123e8c86f3e211bebc9 Jan 23 09:22:23 crc kubenswrapper[4684]: I0123 09:22:23.867675 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerStarted","Data":"76395c8689115f72827b6370a9a92e1054ea31f23ce32123e8c86f3e211bebc9"} Jan 23 09:22:24 crc kubenswrapper[4684]: I0123 09:22:24.874508 4684 generic.go:334] "Generic (PLEG): container finished" podID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerID="8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44" exitCode=0 Jan 23 09:22:24 crc kubenswrapper[4684]: I0123 09:22:24.874548 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerDied","Data":"8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44"} Jan 23 09:22:24 crc kubenswrapper[4684]: I0123 09:22:24.877755 4684 generic.go:334] "Generic (PLEG): container finished" podID="169c6832-37df-469f-9ff3-c0775456568a" containerID="a747fcdb61f4427c55b9888bbcdda9b4e1f44705b3ce5d858c52cc339a27b378" exitCode=0 Jan 23 09:22:24 crc kubenswrapper[4684]: I0123 09:22:24.877794 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" event={"ID":"169c6832-37df-469f-9ff3-c0775456568a","Type":"ContainerDied","Data":"a747fcdb61f4427c55b9888bbcdda9b4e1f44705b3ce5d858c52cc339a27b378"} Jan 23 09:22:25 crc kubenswrapper[4684]: I0123 09:22:25.885087 4684 generic.go:334] "Generic (PLEG): container finished" podID="169c6832-37df-469f-9ff3-c0775456568a" containerID="b5c332c3d370df085853197ecb6bca3b507d689fb79b2e6e48af9688b4e3ffcc" exitCode=0 Jan 23 09:22:25 crc kubenswrapper[4684]: I0123 09:22:25.885492 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" event={"ID":"169c6832-37df-469f-9ff3-c0775456568a","Type":"ContainerDied","Data":"b5c332c3d370df085853197ecb6bca3b507d689fb79b2e6e48af9688b4e3ffcc"} Jan 23 09:22:25 crc kubenswrapper[4684]: I0123 09:22:25.888439 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" 
event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerStarted","Data":"e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d"} Jan 23 09:22:26 crc kubenswrapper[4684]: I0123 09:22:26.896956 4684 generic.go:334] "Generic (PLEG): container finished" podID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerID="e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d" exitCode=0 Jan 23 09:22:26 crc kubenswrapper[4684]: I0123 09:22:26.898846 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerDied","Data":"e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d"} Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.114137 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.269771 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-util\") pod \"169c6832-37df-469f-9ff3-c0775456568a\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.269867 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4jrr\" (UniqueName: \"kubernetes.io/projected/169c6832-37df-469f-9ff3-c0775456568a-kube-api-access-t4jrr\") pod \"169c6832-37df-469f-9ff3-c0775456568a\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.269901 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-bundle\") pod \"169c6832-37df-469f-9ff3-c0775456568a\" (UID: \"169c6832-37df-469f-9ff3-c0775456568a\") " Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.270804 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-bundle" (OuterVolumeSpecName: "bundle") pod "169c6832-37df-469f-9ff3-c0775456568a" (UID: "169c6832-37df-469f-9ff3-c0775456568a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.282295 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-util" (OuterVolumeSpecName: "util") pod "169c6832-37df-469f-9ff3-c0775456568a" (UID: "169c6832-37df-469f-9ff3-c0775456568a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.282844 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169c6832-37df-469f-9ff3-c0775456568a-kube-api-access-t4jrr" (OuterVolumeSpecName: "kube-api-access-t4jrr") pod "169c6832-37df-469f-9ff3-c0775456568a" (UID: "169c6832-37df-469f-9ff3-c0775456568a"). InnerVolumeSpecName "kube-api-access-t4jrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.371744 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.371777 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4jrr\" (UniqueName: \"kubernetes.io/projected/169c6832-37df-469f-9ff3-c0775456568a-kube-api-access-t4jrr\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.371788 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/169c6832-37df-469f-9ff3-c0775456568a-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.905115 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.905160 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg" event={"ID":"169c6832-37df-469f-9ff3-c0775456568a","Type":"ContainerDied","Data":"76fdc12c3e352c03fb97ac6e6daa6eae1f2d5406468bd0d40b9d0f3e895bec06"} Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.905200 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76fdc12c3e352c03fb97ac6e6daa6eae1f2d5406468bd0d40b9d0f3e895bec06" Jan 23 09:22:27 crc kubenswrapper[4684]: I0123 09:22:27.908370 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerStarted","Data":"af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2"} Jan 23 09:22:28 crc kubenswrapper[4684]: I0123 09:22:28.154843 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k7sgb" podStartSLOduration=3.419423292 podStartE2EDuration="6.154819669s" podCreationTimestamp="2026-01-23 09:22:22 +0000 UTC" firstStartedPulling="2026-01-23 09:22:24.875847683 +0000 UTC m=+917.499226224" lastFinishedPulling="2026-01-23 09:22:27.61124406 +0000 UTC m=+920.234622601" observedRunningTime="2026-01-23 09:22:27.933780902 +0000 UTC m=+920.557159443" watchObservedRunningTime="2026-01-23 09:22:28.154819669 +0000 UTC m=+920.778198210" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.424346 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-qrbb4"] Jan 23 09:22:31 crc kubenswrapper[4684]: E0123 09:22:31.424588 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="extract" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.424603 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="extract" Jan 23 09:22:31 crc kubenswrapper[4684]: E0123 09:22:31.424624 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="util" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.424631 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="util" Jan 23 09:22:31 crc 
kubenswrapper[4684]: E0123 09:22:31.424660 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="pull" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.424668 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="pull" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.424830 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="169c6832-37df-469f-9ff3-c0775456568a" containerName="extract" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.425272 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.428341 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.428406 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4jgl6" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.429271 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.446048 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-qrbb4"] Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.524903 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvdv\" (UniqueName: \"kubernetes.io/projected/4e70b1ea-5bbb-44b8-893b-0b08388d8a39-kube-api-access-pdvdv\") pod \"nmstate-operator-646758c888-qrbb4\" (UID: \"4e70b1ea-5bbb-44b8-893b-0b08388d8a39\") " pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.627884 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvdv\" (UniqueName: \"kubernetes.io/projected/4e70b1ea-5bbb-44b8-893b-0b08388d8a39-kube-api-access-pdvdv\") pod \"nmstate-operator-646758c888-qrbb4\" (UID: \"4e70b1ea-5bbb-44b8-893b-0b08388d8a39\") " pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.651581 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvdv\" (UniqueName: \"kubernetes.io/projected/4e70b1ea-5bbb-44b8-893b-0b08388d8a39-kube-api-access-pdvdv\") pod \"nmstate-operator-646758c888-qrbb4\" (UID: \"4e70b1ea-5bbb-44b8-893b-0b08388d8a39\") " pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" Jan 23 09:22:31 crc kubenswrapper[4684]: I0123 09:22:31.740317 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" Jan 23 09:22:32 crc kubenswrapper[4684]: I0123 09:22:32.042412 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-qrbb4"] Jan 23 09:22:32 crc kubenswrapper[4684]: W0123 09:22:32.045292 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e70b1ea_5bbb_44b8_893b_0b08388d8a39.slice/crio-eb15fc80e33780435d2d1c333ee7c43c9ad530fdae3c47f213a57912cef55b35 WatchSource:0}: Error finding container eb15fc80e33780435d2d1c333ee7c43c9ad530fdae3c47f213a57912cef55b35: Status 404 returned error can't find the container with id eb15fc80e33780435d2d1c333ee7c43c9ad530fdae3c47f213a57912cef55b35 Jan 23 09:22:32 crc kubenswrapper[4684]: I0123 09:22:32.936846 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" event={"ID":"4e70b1ea-5bbb-44b8-893b-0b08388d8a39","Type":"ContainerStarted","Data":"eb15fc80e33780435d2d1c333ee7c43c9ad530fdae3c47f213a57912cef55b35"} Jan 23 09:22:33 crc kubenswrapper[4684]: I0123 09:22:33.233505 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:33 crc kubenswrapper[4684]: I0123 09:22:33.233568 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7sgb" Jan 23 09:22:34 crc kubenswrapper[4684]: I0123 09:22:34.283621 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7sgb" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="registry-server" probeResult="failure" output=< Jan 23 09:22:34 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 09:22:34 crc kubenswrapper[4684]: > Jan 23 09:22:37 crc kubenswrapper[4684]: I0123 09:22:37.968215 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" event={"ID":"4e70b1ea-5bbb-44b8-893b-0b08388d8a39","Type":"ContainerStarted","Data":"81aeaf3b100bd4bca1ef3b29e5f2ae15d6974aae1cfe8658337327780bac98f0"} Jan 23 09:22:37 crc kubenswrapper[4684]: I0123 09:22:37.985053 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-qrbb4" podStartSLOduration=2.134798614 podStartE2EDuration="6.985030966s" podCreationTimestamp="2026-01-23 09:22:31 +0000 UTC" firstStartedPulling="2026-01-23 09:22:32.049193931 +0000 UTC m=+924.672572472" lastFinishedPulling="2026-01-23 09:22:36.899426283 +0000 UTC m=+929.522804824" observedRunningTime="2026-01-23 09:22:37.98411165 +0000 UTC m=+930.607490211" watchObservedRunningTime="2026-01-23 09:22:37.985030966 +0000 UTC m=+930.608409507" Jan 23 09:22:40 crc kubenswrapper[4684]: I0123 09:22:40.993302 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj"] Jan 23 09:22:40 crc kubenswrapper[4684]: I0123 09:22:40.994632 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:40 crc kubenswrapper[4684]: I0123 09:22:40.996833 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dlvm4"] Jan 23 09:22:40 crc kubenswrapper[4684]: I0123 09:22:40.996887 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 09:22:40 crc kubenswrapper[4684]: I0123 09:22:40.997061 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hhvq6" Jan 23 09:22:40 crc kubenswrapper[4684]: I0123 09:22:40.997738 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.016180 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.019640 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dlvm4"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.046252 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2kxj8"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.046988 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.089746 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-ovs-socket\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.089798 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bdx\" (UniqueName: \"kubernetes.io/projected/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-kube-api-access-k5bdx\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.089915 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jj5n\" (UniqueName: \"kubernetes.io/projected/7f98efc7-bdf6-4943-8ef9-9056f713acb2-kube-api-access-2jj5n\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.090046 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-dbus-socket\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.090137 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4j2\" (UniqueName: \"kubernetes.io/projected/55e58493-0888-4e94-bf0f-6c5b99a10ac4-kube-api-access-mc4j2\") pod \"nmstate-metrics-54757c584b-dlvm4\" (UID: \"55e58493-0888-4e94-bf0f-6c5b99a10ac4\") " 
pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.090201 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-nmstate-lock\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.090223 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f98efc7-bdf6-4943-8ef9-9056f713acb2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.158586 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.159309 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.161747 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.161760 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.162435 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h4brl" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.190971 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jj5n\" (UniqueName: \"kubernetes.io/projected/7f98efc7-bdf6-4943-8ef9-9056f713acb2-kube-api-access-2jj5n\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191056 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-dbus-socket\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191099 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfa793-7aff-4710-ae19-260a52e2957f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191120 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv9k2\" (UniqueName: \"kubernetes.io/projected/bedfa793-7aff-4710-ae19-260a52e2957f-kube-api-access-gv9k2\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191148 4684 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bedfa793-7aff-4710-ae19-260a52e2957f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191170 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc4j2\" (UniqueName: \"kubernetes.io/projected/55e58493-0888-4e94-bf0f-6c5b99a10ac4-kube-api-access-mc4j2\") pod \"nmstate-metrics-54757c584b-dlvm4\" (UID: \"55e58493-0888-4e94-bf0f-6c5b99a10ac4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191205 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-nmstate-lock\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191221 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f98efc7-bdf6-4943-8ef9-9056f713acb2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191240 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-ovs-socket\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191261 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bdx\" (UniqueName: \"kubernetes.io/projected/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-kube-api-access-k5bdx\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191334 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-dbus-socket\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: E0123 09:22:41.191417 4684 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 09:22:41 crc kubenswrapper[4684]: E0123 09:22:41.191458 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f98efc7-bdf6-4943-8ef9-9056f713acb2-tls-key-pair podName:7f98efc7-bdf6-4943-8ef9-9056f713acb2 nodeName:}" failed. No retries permitted until 2026-01-23 09:22:41.691441606 +0000 UTC m=+934.314820137 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7f98efc7-bdf6-4943-8ef9-9056f713acb2-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-p4bsj" (UID: "7f98efc7-bdf6-4943-8ef9-9056f713acb2") : secret "openshift-nmstate-webhook" not found Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191470 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-ovs-socket\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.191504 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-nmstate-lock\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.203500 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.217424 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bdx\" (UniqueName: \"kubernetes.io/projected/2125ebe0-da30-4e7c-93e0-66b7aa2b87e4-kube-api-access-k5bdx\") pod \"nmstate-handler-2kxj8\" (UID: \"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4\") " pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.217735 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jj5n\" (UniqueName: \"kubernetes.io/projected/7f98efc7-bdf6-4943-8ef9-9056f713acb2-kube-api-access-2jj5n\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.219585 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc4j2\" (UniqueName: \"kubernetes.io/projected/55e58493-0888-4e94-bf0f-6c5b99a10ac4-kube-api-access-mc4j2\") pod \"nmstate-metrics-54757c584b-dlvm4\" (UID: \"55e58493-0888-4e94-bf0f-6c5b99a10ac4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.292278 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfa793-7aff-4710-ae19-260a52e2957f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.292320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv9k2\" (UniqueName: \"kubernetes.io/projected/bedfa793-7aff-4710-ae19-260a52e2957f-kube-api-access-gv9k2\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.292345 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bedfa793-7aff-4710-ae19-260a52e2957f-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: E0123 09:22:41.292427 4684 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 23 09:22:41 crc kubenswrapper[4684]: E0123 09:22:41.292480 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bedfa793-7aff-4710-ae19-260a52e2957f-plugin-serving-cert podName:bedfa793-7aff-4710-ae19-260a52e2957f nodeName:}" failed. No retries permitted until 2026-01-23 09:22:41.792466028 +0000 UTC m=+934.415844569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bedfa793-7aff-4710-ae19-260a52e2957f-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-l7dkm" (UID: "bedfa793-7aff-4710-ae19-260a52e2957f") : secret "plugin-serving-cert" not found Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.293228 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bedfa793-7aff-4710-ae19-260a52e2957f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.314490 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv9k2\" (UniqueName: \"kubernetes.io/projected/bedfa793-7aff-4710-ae19-260a52e2957f-kube-api-access-gv9k2\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.322984 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.360389 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2kxj8" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.396182 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dd4b96b5d-6rpcf"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.397045 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: W0123 09:22:41.412785 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2125ebe0_da30_4e7c_93e0_66b7aa2b87e4.slice/crio-1a0a000d9aac0b8ed485317c88f73fc11e4ecbc1a2719061b1e9e453021bc478 WatchSource:0}: Error finding container 1a0a000d9aac0b8ed485317c88f73fc11e4ecbc1a2719061b1e9e453021bc478: Status 404 returned error can't find the container with id 1a0a000d9aac0b8ed485317c88f73fc11e4ecbc1a2719061b1e9e453021bc478 Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.419136 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dd4b96b5d-6rpcf"] Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.596397 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-console-config\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.596833 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlqr\" (UniqueName: \"kubernetes.io/projected/f236cdf1-17e1-40de-b363-a790caa0263b-kube-api-access-6vlqr\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.596902 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-trusted-ca-bundle\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.596929 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-service-ca\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.596965 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-oauth-serving-cert\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.596984 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f236cdf1-17e1-40de-b363-a790caa0263b-console-serving-cert\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.597134 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f236cdf1-17e1-40de-b363-a790caa0263b-console-oauth-config\") pod 
\"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699093 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-console-config\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699149 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f98efc7-bdf6-4943-8ef9-9056f713acb2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699178 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vlqr\" (UniqueName: \"kubernetes.io/projected/f236cdf1-17e1-40de-b363-a790caa0263b-kube-api-access-6vlqr\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699206 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-trusted-ca-bundle\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699240 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-service-ca\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699268 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-oauth-serving-cert\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699289 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f236cdf1-17e1-40de-b363-a790caa0263b-console-serving-cert\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.699324 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f236cdf1-17e1-40de-b363-a790caa0263b-console-oauth-config\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.701369 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-service-ca\") pod 
\"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.701462 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-oauth-serving-cert\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.702736 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-console-config\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.703977 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f236cdf1-17e1-40de-b363-a790caa0263b-trusted-ca-bundle\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.704500 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f236cdf1-17e1-40de-b363-a790caa0263b-console-oauth-config\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.704816 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f236cdf1-17e1-40de-b363-a790caa0263b-console-serving-cert\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.705131 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f98efc7-bdf6-4943-8ef9-9056f713acb2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-p4bsj\" (UID: \"7f98efc7-bdf6-4943-8ef9-9056f713acb2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.717777 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vlqr\" (UniqueName: \"kubernetes.io/projected/f236cdf1-17e1-40de-b363-a790caa0263b-kube-api-access-6vlqr\") pod \"console-5dd4b96b5d-6rpcf\" (UID: \"f236cdf1-17e1-40de-b363-a790caa0263b\") " pod="openshift-console/console-5dd4b96b5d-6rpcf" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.799805 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfa793-7aff-4710-ae19-260a52e2957f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l7dkm\" (UID: \"bedfa793-7aff-4710-ae19-260a52e2957f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.803308 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfa793-7aff-4710-ae19-260a52e2957f-plugin-serving-cert\") pod 
Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.824919 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dlvm4"]
Jan 23 09:22:41 crc kubenswrapper[4684]: W0123 09:22:41.835021 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55e58493_0888_4e94_bf0f_6c5b99a10ac4.slice/crio-2d2479b5fa1d76149c01d3364aace3e524ca4e99c1fccb754dd6d5becc37914d WatchSource:0}: Error finding container 2d2479b5fa1d76149c01d3364aace3e524ca4e99c1fccb754dd6d5becc37914d: Status 404 returned error can't find the container with id 2d2479b5fa1d76149c01d3364aace3e524ca4e99c1fccb754dd6d5becc37914d
Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.912258 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj"
Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.993243 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2kxj8" event={"ID":"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4","Type":"ContainerStarted","Data":"1a0a000d9aac0b8ed485317c88f73fc11e4ecbc1a2719061b1e9e453021bc478"}
Jan 23 09:22:41 crc kubenswrapper[4684]: I0123 09:22:41.994895 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" event={"ID":"55e58493-0888-4e94-bf0f-6c5b99a10ac4","Type":"ContainerStarted","Data":"2d2479b5fa1d76149c01d3364aace3e524ca4e99c1fccb754dd6d5becc37914d"}
Jan 23 09:22:42 crc kubenswrapper[4684]: I0123 09:22:42.015776 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dd4b96b5d-6rpcf"
Jan 23 09:22:42 crc kubenswrapper[4684]: I0123 09:22:42.082146 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm"
Jan 23 09:22:42 crc kubenswrapper[4684]: I0123 09:22:42.179803 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj"]
Jan 23 09:22:42 crc kubenswrapper[4684]: I0123 09:22:42.369425 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dd4b96b5d-6rpcf"]
Jan 23 09:22:42 crc kubenswrapper[4684]: I0123 09:22:42.518662 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm"]
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.002618 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd4b96b5d-6rpcf" event={"ID":"f236cdf1-17e1-40de-b363-a790caa0263b","Type":"ContainerStarted","Data":"2ec8da26900c64f30759f32e3c57fdc4f5a6c6c44b0d01adc213db9f5481649b"}
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.002661 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd4b96b5d-6rpcf" event={"ID":"f236cdf1-17e1-40de-b363-a790caa0263b","Type":"ContainerStarted","Data":"40da55595481b61f6931c33295a253546c1035a90be79d182f0255f5cd6bcb00"}
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.017542 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" event={"ID":"bedfa793-7aff-4710-ae19-260a52e2957f","Type":"ContainerStarted","Data":"f884306b4905ecae5dea1570f1b089d73bab17159e12972a89e765f2f98f0579"}
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.019001 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" event={"ID":"7f98efc7-bdf6-4943-8ef9-9056f713acb2","Type":"ContainerStarted","Data":"900e890b9502cd5bc60502bd4b5bcd2688a938ecae544f53f8244e2d734b56e2"}
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.038289 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dd4b96b5d-6rpcf" podStartSLOduration=2.03826572 podStartE2EDuration="2.03826572s" podCreationTimestamp="2026-01-23 09:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:22:43.036267242 +0000 UTC m=+935.659645783" watchObservedRunningTime="2026-01-23 09:22:43.03826572 +0000 UTC m=+935.661644261"
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.281552 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7sgb"
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.340731 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7sgb"
Jan 23 09:22:43 crc kubenswrapper[4684]: I0123 09:22:43.514581 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7sgb"]
Jan 23 09:22:45 crc kubenswrapper[4684]: I0123 09:22:45.034013 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k7sgb" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="registry-server" containerID="cri-o://af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2" gracePeriod=2
Jan 23 09:22:45 crc kubenswrapper[4684]: I0123 09:22:45.991836 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7sgb"
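
Each "SyncLoop (PLEG): event for pod" entry above carries an event serialized as {"ID","Type","Data"}. A toy Go struct with the same JSON shape; the field meanings (pod UID, event kind, container or sandbox ID) are read off this log rather than taken from kubelet source:

package main

import (
	"encoding/json"
	"fmt"
)

// PodLifecycleEvent mimics the serialized shape seen in the log only.
type PodLifecycleEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // e.g. ContainerStarted, ContainerDied
	Data string `json:"Data"` // container or sandbox ID
}

func main() {
	e := PodLifecycleEvent{
		ID:   "f236cdf1-17e1-40de-b363-a790caa0263b",
		Type: "ContainerStarted",
		Data: "2ec8da26900c64f30759f32e3c57fdc4f5a6c6c44b0d01adc213db9f5481649b",
	}
	b, err := json.Marshal(e)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"ID":"f236cdf1-...","Type":"ContainerStarted","Data":"2ec8da26..."}
}
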
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.046676 4684 generic.go:334] "Generic (PLEG): container finished" podID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerID="af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2" exitCode=0
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.046897 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerDied","Data":"af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2"}
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.047488 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7sgb" event={"ID":"49af2e7d-f2c3-4fff-8e1f-659c9334154c","Type":"ContainerDied","Data":"76395c8689115f72827b6370a9a92e1054ea31f23ce32123e8c86f3e211bebc9"}
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.047524 4684 scope.go:117] "RemoveContainer" containerID="af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.047025 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7sgb"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.065410 4684 scope.go:117] "RemoveContainer" containerID="e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.070617 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpx8\" (UniqueName: \"kubernetes.io/projected/49af2e7d-f2c3-4fff-8e1f-659c9334154c-kube-api-access-srpx8\") pod \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") "
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.070743 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-utilities\") pod \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") "
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.070807 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-catalog-content\") pod \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\" (UID: \"49af2e7d-f2c3-4fff-8e1f-659c9334154c\") "
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.071856 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-utilities" (OuterVolumeSpecName: "utilities") pod "49af2e7d-f2c3-4fff-8e1f-659c9334154c" (UID: "49af2e7d-f2c3-4fff-8e1f-659c9334154c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.077989 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49af2e7d-f2c3-4fff-8e1f-659c9334154c-kube-api-access-srpx8" (OuterVolumeSpecName: "kube-api-access-srpx8") pod "49af2e7d-f2c3-4fff-8e1f-659c9334154c" (UID: "49af2e7d-f2c3-4fff-8e1f-659c9334154c"). InnerVolumeSpecName "kube-api-access-srpx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.097078 4684 scope.go:117] "RemoveContainer" containerID="8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.135347 4684 scope.go:117] "RemoveContainer" containerID="af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2"
Jan 23 09:22:46 crc kubenswrapper[4684]: E0123 09:22:46.140595 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2\": container with ID starting with af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2 not found: ID does not exist" containerID="af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.140628 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2"} err="failed to get container status \"af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2\": rpc error: code = NotFound desc = could not find container \"af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2\": container with ID starting with af96dc7b98643f0546d6f3cb856d847997934f21164395780b87caac765650f2 not found: ID does not exist"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.140651 4684 scope.go:117] "RemoveContainer" containerID="e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d"
Jan 23 09:22:46 crc kubenswrapper[4684]: E0123 09:22:46.141032 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d\": container with ID starting with e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d not found: ID does not exist" containerID="e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.141074 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d"} err="failed to get container status \"e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d\": rpc error: code = NotFound desc = could not find container \"e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d\": container with ID starting with e90af99f4d57aa04f2da4f466cf265ca1fd37b32804b589623edaa7320b0765d not found: ID does not exist"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.141102 4684 scope.go:117] "RemoveContainer" containerID="8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44"
Jan 23 09:22:46 crc kubenswrapper[4684]: E0123 09:22:46.141396 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44\": container with ID starting with 8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44 not found: ID does not exist" containerID="8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.141422 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44"} err="failed to get container status \"8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44\": rpc error: code = NotFound desc = could not find container \"8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44\": container with ID starting with 8a6fedd01becab8b9c3ecffcf55cf91f9d3ebe881602c463d580b80980c8bb44 not found: ID does not exist"
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.173154 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpx8\" (UniqueName: \"kubernetes.io/projected/49af2e7d-f2c3-4fff-8e1f-659c9334154c-kube-api-access-srpx8\") on node \"crc\" DevicePath \"\""
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.173195 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.219182 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49af2e7d-f2c3-4fff-8e1f-659c9334154c" (UID: "49af2e7d-f2c3-4fff-8e1f-659c9334154c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.274793 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49af2e7d-f2c3-4fff-8e1f-659c9334154c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.391678 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7sgb"]
Jan 23 09:22:46 crc kubenswrapper[4684]: I0123 09:22:46.401290 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k7sgb"]
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.057451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" event={"ID":"bedfa793-7aff-4710-ae19-260a52e2957f","Type":"ContainerStarted","Data":"d4ffda6820653d339c725c6df74727e13d86dbe86ae3319ff03262962a97bd21"}
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.061973 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" event={"ID":"7f98efc7-bdf6-4943-8ef9-9056f713acb2","Type":"ContainerStarted","Data":"74fb4ad4d076258c4a47af898992ee8062eb093185f239a6a73cbed2c8193180"}
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.062626 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj"
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.064037 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2kxj8" event={"ID":"2125ebe0-da30-4e7c-93e0-66b7aa2b87e4","Type":"ContainerStarted","Data":"c15f154091fa41755963b6c3609325c01bb94177baca140822ba1425dab5de8c"}
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.064682 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2kxj8"
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.065999 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" event={"ID":"55e58493-0888-4e94-bf0f-6c5b99a10ac4","Type":"ContainerStarted","Data":"fc346fc44a7603d8a2a5179495260c6524434ec9d59b844f83c165bf63bfb5bb"}
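
The paired log.go / pod_container_deletor.go entries above show RemoveContainer racing with CRI-O's own cleanup: the runtime answers with gRPC NotFound, and the kubelet treats the container as already gone. A minimal sketch of that error check using google.golang.org/grpc/status (illustrative, not kubelet source):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI call failed only because the container
// no longer exists on the runtime side.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulate the runtime response seen in the log entries above.
	err := status.Error(codes.NotFound, "could not find container")
	if alreadyGone(err) {
		fmt.Println("container already removed; treat delete as a no-op")
	}
}
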
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.071066 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l7dkm" podStartSLOduration=1.725352033 podStartE2EDuration="6.071050123s" podCreationTimestamp="2026-01-23 09:22:41 +0000 UTC" firstStartedPulling="2026-01-23 09:22:42.53521493 +0000 UTC m=+935.158593471" lastFinishedPulling="2026-01-23 09:22:46.88091302 +0000 UTC m=+939.504291561" observedRunningTime="2026-01-23 09:22:47.069793277 +0000 UTC m=+939.693171838" watchObservedRunningTime="2026-01-23 09:22:47.071050123 +0000 UTC m=+939.694428664"
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.092214 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj" podStartSLOduration=3.672239396 podStartE2EDuration="7.092190278s" podCreationTimestamp="2026-01-23 09:22:40 +0000 UTC" firstStartedPulling="2026-01-23 09:22:42.246431484 +0000 UTC m=+934.869810025" lastFinishedPulling="2026-01-23 09:22:45.666382366 +0000 UTC m=+938.289760907" observedRunningTime="2026-01-23 09:22:47.086878896 +0000 UTC m=+939.710257457" watchObservedRunningTime="2026-01-23 09:22:47.092190278 +0000 UTC m=+939.715568819"
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.100569 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2kxj8" podStartSLOduration=1.871488716 podStartE2EDuration="6.100535807s" podCreationTimestamp="2026-01-23 09:22:41 +0000 UTC" firstStartedPulling="2026-01-23 09:22:41.421476941 +0000 UTC m=+934.044855482" lastFinishedPulling="2026-01-23 09:22:45.650524032 +0000 UTC m=+938.273902573" observedRunningTime="2026-01-23 09:22:47.100348121 +0000 UTC m=+939.723726662" watchObservedRunningTime="2026-01-23 09:22:47.100535807 +0000 UTC m=+939.723914348"
Jan 23 09:22:47 crc kubenswrapper[4684]: I0123 09:22:47.593406 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" path="/var/lib/kubelet/pods/49af2e7d-f2c3-4fff-8e1f-659c9334154c/volumes"
Jan 23 09:22:49 crc kubenswrapper[4684]: I0123 09:22:49.081476 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" event={"ID":"55e58493-0888-4e94-bf0f-6c5b99a10ac4","Type":"ContainerStarted","Data":"e6ecbebd87296850cdb2b9b83db4d12709a753d02a482dbad6f1cf1e3752b188"}
Jan 23 09:22:49 crc kubenswrapper[4684]: I0123 09:22:49.104644 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-dlvm4" podStartSLOduration=2.494026691 podStartE2EDuration="9.10461675s" podCreationTimestamp="2026-01-23 09:22:40 +0000 UTC" firstStartedPulling="2026-01-23 09:22:41.837732296 +0000 UTC m=+934.461110837" lastFinishedPulling="2026-01-23 09:22:48.448322355 +0000 UTC m=+941.071700896" observedRunningTime="2026-01-23 09:22:49.101538102 +0000 UTC m=+941.724916693" watchObservedRunningTime="2026-01-23 09:22:49.10461675 +0000 UTC m=+941.727995311"
Jan 23 09:22:51 crc kubenswrapper[4684]: I0123 09:22:51.384041 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2kxj8"
Jan 23 09:22:52 crc kubenswrapper[4684]: I0123 09:22:52.016584 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dd4b96b5d-6rpcf"
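
In the pod_startup_latency_tracker entries above, podStartE2EDuration is simply watchObservedRunningTime minus podCreationTimestamp. A small Go check using the nmstate-console-plugin timestamps copied from the log (the monotonic "m=+…" suffix must be stripped before parsing):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time formatting, which these log fields use.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2026-01-23 09:22:41 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// time.Parse accepts fractional seconds even though the layout omits them.
	observed, err := time.Parse(layout, "2026-01-23 09:22:47.071050123 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
	fmt.Println(observed.Sub(created)) // prints 6.071050123s, matching the log
}
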
Jan 23 09:22:52 crc kubenswrapper[4684]: I0123 09:22:52.016650 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dd4b96b5d-6rpcf"
Jan 23 09:22:52 crc kubenswrapper[4684]: I0123 09:22:52.021323 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dd4b96b5d-6rpcf"
Jan 23 09:22:52 crc kubenswrapper[4684]: I0123 09:22:52.100321 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dd4b96b5d-6rpcf"
Jan 23 09:22:52 crc kubenswrapper[4684]: I0123 09:22:52.157326 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wd9fz"]
Jan 23 09:23:01 crc kubenswrapper[4684]: I0123 09:23:01.917682 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-p4bsj"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.124204 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"]
Jan 23 09:23:14 crc kubenswrapper[4684]: E0123 09:23:14.125004 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="extract-utilities"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.125021 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="extract-utilities"
Jan 23 09:23:14 crc kubenswrapper[4684]: E0123 09:23:14.125036 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="registry-server"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.125044 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="registry-server"
Jan 23 09:23:14 crc kubenswrapper[4684]: E0123 09:23:14.125070 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="extract-content"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.125080 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="extract-content"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.125210 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af2e7d-f2c3-4fff-8e1f-659c9334154c" containerName="registry-server"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.126133 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
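
The cpu_manager / state_mem pairs above fire when a new pod is admitted and the managers notice per-container assignments left behind by pods that no longer exist (here, the containers of the deleted redhat-operators-k7sgb pod). A toy in-memory analogue of that cleanup pass; the types and names are illustrative, not kubelet's:

package main

import "fmt"

// key mirrors the idea of a (pod, container) assignment entry; illustrative only.
type key struct{ podUID, container string }

func main() {
	// Assignment left behind by the deleted redhat-operators-k7sgb pod.
	assignments := map[key]string{
		{podUID: "49af2e7d-f2c3-4fff-8e1f-659c9334154c", container: "registry-server"}: "2-3",
	}
	active := map[string]bool{} // no currently-running pod claims that UID

	for k := range assignments { // deleting during range is safe in Go
		if !active[k.podUID] {
			delete(assignments, k)
			fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n",
				k.podUID, k.container)
		}
	}
}
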
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.133348 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.136549 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"]
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.272650 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzrj\" (UniqueName: \"kubernetes.io/projected/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-kube-api-access-8nzrj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.272727 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.272820 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.374329 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.374394 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzrj\" (UniqueName: \"kubernetes.io/projected/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-kube-api-access-8nzrj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.374425 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.374857 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.374923 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.394897 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzrj\" (UniqueName: \"kubernetes.io/projected/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-kube-api-access-8nzrj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.445922 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:14 crc kubenswrapper[4684]: I0123 09:23:14.624333 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"]
Jan 23 09:23:15 crc kubenswrapper[4684]: I0123 09:23:15.235494 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerStarted","Data":"9ca486b92234212992900a78ab5de32ea277777f49b86d2d9b8581a308146485"}
Jan 23 09:23:17 crc kubenswrapper[4684]: I0123 09:23:17.192998 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wd9fz" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerName="console" containerID="cri-o://9c8580cf9d6f1f3b2e3183d9599cd2dc8a20148912e482b2d8f1ed733d44fe11" gracePeriod=15
Jan 23 09:23:18 crc kubenswrapper[4684]: I0123 09:23:18.253678 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerStarted","Data":"d2d8e778f5b3e93bd09d83c3b1dbbd09574859750dfc90eed3f5ae5c394c9355"}
Jan 23 09:23:19 crc kubenswrapper[4684]: I0123 09:23:19.261231 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wd9fz_31ebe80c-870d-4be6-844c-504b72eb09d6/console/0.log"
Jan 23 09:23:19 crc kubenswrapper[4684]: I0123 09:23:19.261450 4684 generic.go:334] "Generic (PLEG): container finished" podID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerID="9c8580cf9d6f1f3b2e3183d9599cd2dc8a20148912e482b2d8f1ed733d44fe11" exitCode=2
Jan 23 09:23:19 crc kubenswrapper[4684]: I0123 09:23:19.261475 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wd9fz" event={"ID":"31ebe80c-870d-4be6-844c-504b72eb09d6","Type":"ContainerDied","Data":"9c8580cf9d6f1f3b2e3183d9599cd2dc8a20148912e482b2d8f1ed733d44fe11"}
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.270291 4684 generic.go:334] "Generic (PLEG): container finished" podID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerID="d2d8e778f5b3e93bd09d83c3b1dbbd09574859750dfc90eed3f5ae5c394c9355" exitCode=0
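
The "bundle" and "util" volumes mounted above are emptyDirs (plugin kubernetes.io/empty-dir), while kube-api-access-8nzrj is a projected service-account token volume. A sketch of how the two emptyDirs would be declared with the Go API types from k8s.io/api/core/v1 (an illustrative pod-spec fragment, not taken from the operator's source):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vols := []v1.Volume{
		{Name: "bundle", VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}},
		{Name: "util", VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}},
	}
	for _, v := range vols {
		fmt.Println(v.Name) // node-local scratch space, deleted with the pod
	}
}
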
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.270331 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerDied","Data":"d2d8e778f5b3e93bd09d83c3b1dbbd09574859750dfc90eed3f5ae5c394c9355"}
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.329105 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wd9fz_31ebe80c-870d-4be6-844c-504b72eb09d6/console/0.log"
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.329164 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wd9fz"
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460190 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-oauth-config\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460285 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-serving-cert\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460342 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8rx\" (UniqueName: \"kubernetes.io/projected/31ebe80c-870d-4be6-844c-504b72eb09d6-kube-api-access-2s8rx\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460382 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-trusted-ca-bundle\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460450 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-console-config\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460472 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-service-ca\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.460499 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-oauth-serving-cert\") pod \"31ebe80c-870d-4be6-844c-504b72eb09d6\" (UID: \"31ebe80c-870d-4be6-844c-504b72eb09d6\") "
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.461187 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.461422 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.461743 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-console-config" (OuterVolumeSpecName: "console-config") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.463775 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-service-ca" (OuterVolumeSpecName: "service-ca") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.465050 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ebe80c-870d-4be6-844c-504b72eb09d6-kube-api-access-2s8rx" (OuterVolumeSpecName: "kube-api-access-2s8rx") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "kube-api-access-2s8rx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.465192 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.473829 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "31ebe80c-870d-4be6-844c-504b72eb09d6" (UID: "31ebe80c-870d-4be6-844c-504b72eb09d6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561636 4684 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-console-config\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561666 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-service-ca\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561675 4684 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561683 4684 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561693 4684 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebe80c-870d-4be6-844c-504b72eb09d6-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561715 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8rx\" (UniqueName: \"kubernetes.io/projected/31ebe80c-870d-4be6-844c-504b72eb09d6-kube-api-access-2s8rx\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:20 crc kubenswrapper[4684]: I0123 09:23:20.561725 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebe80c-870d-4be6-844c-504b72eb09d6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.277541 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wd9fz_31ebe80c-870d-4be6-844c-504b72eb09d6/console/0.log"
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.277594 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wd9fz" event={"ID":"31ebe80c-870d-4be6-844c-504b72eb09d6","Type":"ContainerDied","Data":"9cb30a261b457dd8175788a4df57479ba5c1c4b8f7ae517d48b1674045855b08"}
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.277628 4684 scope.go:117] "RemoveContainer" containerID="9c8580cf9d6f1f3b2e3183d9599cd2dc8a20148912e482b2d8f1ed733d44fe11"
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.277765 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wd9fz"
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.305944 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wd9fz"]
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.309128 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wd9fz"]
Jan 23 09:23:21 crc kubenswrapper[4684]: I0123 09:23:21.589333 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" path="/var/lib/kubelet/pods/31ebe80c-870d-4be6-844c-504b72eb09d6/volumes"
Jan 23 09:23:22 crc kubenswrapper[4684]: I0123 09:23:22.286194 4684 generic.go:334] "Generic (PLEG): container finished" podID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerID="5fcd1fce253adf329327f1d6ab5681a4567b9cbe6111e13863e4d605db158c80" exitCode=0
Jan 23 09:23:22 crc kubenswrapper[4684]: I0123 09:23:22.286297 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerDied","Data":"5fcd1fce253adf329327f1d6ab5681a4567b9cbe6111e13863e4d605db158c80"}
Jan 23 09:23:23 crc kubenswrapper[4684]: I0123 09:23:23.293120 4684 generic.go:334] "Generic (PLEG): container finished" podID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerID="a884226ed2fa26d346ab16a23d882eb4bcea8d4a6ce2743245ede3ce769f6821" exitCode=0
Jan 23 09:23:23 crc kubenswrapper[4684]: I0123 09:23:23.293184 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerDied","Data":"a884226ed2fa26d346ab16a23d882eb4bcea8d4a6ce2743245ede3ce769f6821"}
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.558423 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.714226 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-bundle\") pod \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") "
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.714324 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nzrj\" (UniqueName: \"kubernetes.io/projected/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-kube-api-access-8nzrj\") pod \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") "
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.714366 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-util\") pod \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\" (UID: \"dea3f1d3-f2aa-41e3-afb0-ce7658aae496\") "
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.715240 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-bundle" (OuterVolumeSpecName: "bundle") pod "dea3f1d3-f2aa-41e3-afb0-ce7658aae496" (UID: "dea3f1d3-f2aa-41e3-afb0-ce7658aae496"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.719904 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-kube-api-access-8nzrj" (OuterVolumeSpecName: "kube-api-access-8nzrj") pod "dea3f1d3-f2aa-41e3-afb0-ce7658aae496" (UID: "dea3f1d3-f2aa-41e3-afb0-ce7658aae496"). InnerVolumeSpecName "kube-api-access-8nzrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.724692 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-util" (OuterVolumeSpecName: "util") pod "dea3f1d3-f2aa-41e3-afb0-ce7658aae496" (UID: "dea3f1d3-f2aa-41e3-afb0-ce7658aae496"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.816334 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.816388 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nzrj\" (UniqueName: \"kubernetes.io/projected/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-kube-api-access-8nzrj\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:24 crc kubenswrapper[4684]: I0123 09:23:24.816404 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dea3f1d3-f2aa-41e3-afb0-ce7658aae496-util\") on node \"crc\" DevicePath \"\""
Jan 23 09:23:25 crc kubenswrapper[4684]: I0123 09:23:25.308685 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerDied","Data":"9ca486b92234212992900a78ab5de32ea277777f49b86d2d9b8581a308146485"}
Jan 23 09:23:25 crc kubenswrapper[4684]: I0123 09:23:25.308787 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca486b92234212992900a78ab5de32ea277777f49b86d2d9b8581a308146485"
Jan 23 09:23:25 crc kubenswrapper[4684]: I0123 09:23:25.309065 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56"
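
A per-pod event timeline can be pulled out of raw entries like the ones above with a small parser. A Go sketch keyed to this log's shape (the pattern is written for these lines and may need adjusting for other klog output; the sample line is copied from the log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	line := `Jan 23 09:23:25 crc kubenswrapper[4684]: I0123 09:23:25.308685 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56" event={"ID":"dea3f1d3-f2aa-41e3-afb0-ce7658aae496","Type":"ContainerDied","Data":"9ca486b92234212992900a78ab5de32ea277777f49b86d2d9b8581a308146485"}`

	// Capture the klog timestamp, the pod, and the PLEG event type.
	re := regexp.MustCompile(`I(\d{4} \d{2}:\d{2}:\d{2}\.\d+).*?pod="([^"]+)".*?"Type":"(\w+)"`)
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("%s  %s  %s\n", m[1], m[2], m[3])
	}
}
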
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.086175 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5fbwn"]
Jan 23 09:23:32 crc kubenswrapper[4684]: E0123 09:23:32.086912 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="extract"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.086927 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="extract"
Jan 23 09:23:32 crc kubenswrapper[4684]: E0123 09:23:32.086954 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="pull"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.086960 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="pull"
Jan 23 09:23:32 crc kubenswrapper[4684]: E0123 09:23:32.086968 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerName="console"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.086975 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerName="console"
Jan 23 09:23:32 crc kubenswrapper[4684]: E0123 09:23:32.086987 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="util"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.086993 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="util"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.087125 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea3f1d3-f2aa-41e3-afb0-ce7658aae496" containerName="extract"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.087139 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ebe80c-870d-4be6-844c-504b72eb09d6" containerName="console"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.088040 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.099416 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fbwn"]
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.215196 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-catalog-content\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.215249 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-utilities\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.215285 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzsq\" (UniqueName: \"kubernetes.io/projected/731dfd08-3c63-458e-abf7-295bbcb056eb-kube-api-access-tkzsq\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.317092 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-catalog-content\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.317140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-utilities\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.317188 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkzsq\" (UniqueName: \"kubernetes.io/projected/731dfd08-3c63-458e-abf7-295bbcb056eb-kube-api-access-tkzsq\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.317675 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-catalog-content\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.317802 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-utilities\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.348044 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkzsq\" (UniqueName: \"kubernetes.io/projected/731dfd08-3c63-458e-abf7-295bbcb056eb-kube-api-access-tkzsq\") pod \"community-operators-5fbwn\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.401564 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fbwn"
Jan 23 09:23:32 crc kubenswrapper[4684]: I0123 09:23:32.679438 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fbwn"]
Jan 23 09:23:33 crc kubenswrapper[4684]: I0123 09:23:33.356544 4684 generic.go:334] "Generic (PLEG): container finished" podID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerID="51949249989e1d371397f309c3619bc9ad9f38cba7d431533e4e8ed7d6745f10" exitCode=0
Jan 23 09:23:33 crc kubenswrapper[4684]: I0123 09:23:33.358057 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fbwn" event={"ID":"731dfd08-3c63-458e-abf7-295bbcb056eb","Type":"ContainerDied","Data":"51949249989e1d371397f309c3619bc9ad9f38cba7d431533e4e8ed7d6745f10"}
Jan 23 09:23:33 crc kubenswrapper[4684]: I0123 09:23:33.358883 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fbwn" event={"ID":"731dfd08-3c63-458e-abf7-295bbcb056eb","Type":"ContainerStarted","Data":"8a05b64d8f9e7bb1cf24d31fd8f35f299202ca8068976653c4afd83eabeed146"}
Jan 23 09:23:35 crc kubenswrapper[4684]: I0123 09:23:35.371508 4684 generic.go:334] "Generic (PLEG): container finished" podID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerID="fa5701514b99927b0bbc231850824030ca45a9d1890cc1934cf7a7ac274911f2" exitCode=0
Jan 23 09:23:35 crc kubenswrapper[4684]: I0123 09:23:35.371576 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fbwn" event={"ID":"731dfd08-3c63-458e-abf7-295bbcb056eb","Type":"ContainerDied","Data":"fa5701514b99927b0bbc231850824030ca45a9d1890cc1934cf7a7ac274911f2"}
Jan 23 09:23:36 crc kubenswrapper[4684]: I0123 09:23:36.381627 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fbwn" event={"ID":"731dfd08-3c63-458e-abf7-295bbcb056eb","Type":"ContainerStarted","Data":"4937f150ec55aa79da7cdc30561940f527150e541196100275cbec8fd6e2118b"}
Jan 23 09:23:36 crc kubenswrapper[4684]: I0123 09:23:36.406496 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5fbwn" podStartSLOduration=1.767793469 podStartE2EDuration="4.406481007s" podCreationTimestamp="2026-01-23 09:23:32 +0000 UTC" firstStartedPulling="2026-01-23 09:23:33.359569844 +0000 UTC m=+985.982948385" lastFinishedPulling="2026-01-23 09:23:35.998257382 +0000 UTC m=+988.621635923" observedRunningTime="2026-01-23 09:23:36.402302808 +0000 UTC m=+989.025681359" watchObservedRunningTime="2026-01-23 09:23:36.406481007 +0000 UTC m=+989.029859548"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.795317 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"]
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.796369 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.799003 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-92frs"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.803336 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.803851 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.803859 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.804054 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.823314 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-webhook-cert\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.823630 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zfw\" (UniqueName: \"kubernetes.io/projected/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-kube-api-access-h6zfw\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.823732 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-apiservice-cert\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.838421 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"]
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.924156 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-webhook-cert\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.924196 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6zfw\" (UniqueName: \"kubernetes.io/projected/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-kube-api-access-h6zfw\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.924239 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-apiservice-cert\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.933269 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-apiservice-cert\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.945844 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-webhook-cert\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:37 crc kubenswrapper[4684]: I0123 09:23:37.953322 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6zfw\" (UniqueName: \"kubernetes.io/projected/00c9dbc4-3023-4be1-9876-0e2e2b35ac82-kube-api-access-h6zfw\") pod \"metallb-operator-controller-manager-66c47b49dd-q49fh\" (UID: \"00c9dbc4-3023-4be1-9876-0e2e2b35ac82\") " pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.095914 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"]
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.096563 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.101938 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.101965 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.102443 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-v9gzr"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.126838 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"]
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.135122 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.228071 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p552p\" (UniqueName: \"kubernetes.io/projected/c001f52e-014a-4250-af27-7fdcebc0c759-kube-api-access-p552p\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.228203 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c001f52e-014a-4250-af27-7fdcebc0c759-apiservice-cert\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.228236 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c001f52e-014a-4250-af27-7fdcebc0c759-webhook-cert\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.330232 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p552p\" (UniqueName: \"kubernetes.io/projected/c001f52e-014a-4250-af27-7fdcebc0c759-kube-api-access-p552p\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.330823 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c001f52e-014a-4250-af27-7fdcebc0c759-apiservice-cert\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.330851 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c001f52e-014a-4250-af27-7fdcebc0c759-webhook-cert\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.338398 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c001f52e-014a-4250-af27-7fdcebc0c759-apiservice-cert\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"
Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.343303 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c001f52e-014a-4250-af27-7fdcebc0c759-webhook-cert\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") "
pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.359160 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p552p\" (UniqueName: \"kubernetes.io/projected/c001f52e-014a-4250-af27-7fdcebc0c759-kube-api-access-p552p\") pod \"metallb-operator-webhook-server-bfcb9dfcc-7qsz8\" (UID: \"c001f52e-014a-4250-af27-7fdcebc0c759\") " pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.411506 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.466028 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh"] Jan 23 09:23:38 crc kubenswrapper[4684]: W0123 09:23:38.494043 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c9dbc4_3023_4be1_9876_0e2e2b35ac82.slice/crio-84d11fef4ca2c420cca381211b302054d3e1181766931e9e155f5d9546aad7e6 WatchSource:0}: Error finding container 84d11fef4ca2c420cca381211b302054d3e1181766931e9e155f5d9546aad7e6: Status 404 returned error can't find the container with id 84d11fef4ca2c420cca381211b302054d3e1181766931e9e155f5d9546aad7e6 Jan 23 09:23:38 crc kubenswrapper[4684]: I0123 09:23:38.993649 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8"] Jan 23 09:23:39 crc kubenswrapper[4684]: W0123 09:23:39.008019 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc001f52e_014a_4250_af27_7fdcebc0c759.slice/crio-bac5a713e8b6b8b6e2f964393bfc8f5e7b2af9464aa99d389e543fdcdf6068fb WatchSource:0}: Error finding container bac5a713e8b6b8b6e2f964393bfc8f5e7b2af9464aa99d389e543fdcdf6068fb: Status 404 returned error can't find the container with id bac5a713e8b6b8b6e2f964393bfc8f5e7b2af9464aa99d389e543fdcdf6068fb Jan 23 09:23:39 crc kubenswrapper[4684]: I0123 09:23:39.400978 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh" event={"ID":"00c9dbc4-3023-4be1-9876-0e2e2b35ac82","Type":"ContainerStarted","Data":"84d11fef4ca2c420cca381211b302054d3e1181766931e9e155f5d9546aad7e6"} Jan 23 09:23:39 crc kubenswrapper[4684]: I0123 09:23:39.401972 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" event={"ID":"c001f52e-014a-4250-af27-7fdcebc0c759","Type":"ContainerStarted","Data":"bac5a713e8b6b8b6e2f964393bfc8f5e7b2af9464aa99d389e543fdcdf6068fb"} Jan 23 09:23:42 crc kubenswrapper[4684]: I0123 09:23:42.401739 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5fbwn" Jan 23 09:23:42 crc kubenswrapper[4684]: I0123 09:23:42.402069 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5fbwn" Jan 23 09:23:42 crc kubenswrapper[4684]: I0123 09:23:42.460309 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5fbwn" Jan 23 09:23:42 crc kubenswrapper[4684]: I0123 09:23:42.513575 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-5fbwn" Jan 23 09:23:42 crc kubenswrapper[4684]: I0123 09:23:42.691836 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fbwn"] Jan 23 09:23:44 crc kubenswrapper[4684]: I0123 09:23:44.430543 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5fbwn" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="registry-server" containerID="cri-o://4937f150ec55aa79da7cdc30561940f527150e541196100275cbec8fd6e2118b" gracePeriod=2 Jan 23 09:23:45 crc kubenswrapper[4684]: I0123 09:23:45.441197 4684 generic.go:334] "Generic (PLEG): container finished" podID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerID="4937f150ec55aa79da7cdc30561940f527150e541196100275cbec8fd6e2118b" exitCode=0 Jan 23 09:23:45 crc kubenswrapper[4684]: I0123 09:23:45.441244 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fbwn" event={"ID":"731dfd08-3c63-458e-abf7-295bbcb056eb","Type":"ContainerDied","Data":"4937f150ec55aa79da7cdc30561940f527150e541196100275cbec8fd6e2118b"} Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.067021 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fbwn" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.259283 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-catalog-content\") pod \"731dfd08-3c63-458e-abf7-295bbcb056eb\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.259392 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-utilities\") pod \"731dfd08-3c63-458e-abf7-295bbcb056eb\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.260512 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-utilities" (OuterVolumeSpecName: "utilities") pod "731dfd08-3c63-458e-abf7-295bbcb056eb" (UID: "731dfd08-3c63-458e-abf7-295bbcb056eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.260903 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkzsq\" (UniqueName: \"kubernetes.io/projected/731dfd08-3c63-458e-abf7-295bbcb056eb-kube-api-access-tkzsq\") pod \"731dfd08-3c63-458e-abf7-295bbcb056eb\" (UID: \"731dfd08-3c63-458e-abf7-295bbcb056eb\") " Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.262501 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.266139 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731dfd08-3c63-458e-abf7-295bbcb056eb-kube-api-access-tkzsq" (OuterVolumeSpecName: "kube-api-access-tkzsq") pod "731dfd08-3c63-458e-abf7-295bbcb056eb" (UID: "731dfd08-3c63-458e-abf7-295bbcb056eb"). InnerVolumeSpecName "kube-api-access-tkzsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.305283 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "731dfd08-3c63-458e-abf7-295bbcb056eb" (UID: "731dfd08-3c63-458e-abf7-295bbcb056eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.363403 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkzsq\" (UniqueName: \"kubernetes.io/projected/731dfd08-3c63-458e-abf7-295bbcb056eb-kube-api-access-tkzsq\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.363432 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731dfd08-3c63-458e-abf7-295bbcb056eb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.448336 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" event={"ID":"c001f52e-014a-4250-af27-7fdcebc0c759","Type":"ContainerStarted","Data":"146b78d178341430ce3cbe0bae7c56d0b553f0e5dd8327cbf7ca6ffcdcd2e715"} Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.450125 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.451585 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh" event={"ID":"00c9dbc4-3023-4be1-9876-0e2e2b35ac82","Type":"ContainerStarted","Data":"834760f097d2a1c43f0ab5771f871d823b203ad1e3faba48aeaa1a363b787d92"} Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.451959 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.454644 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fbwn" event={"ID":"731dfd08-3c63-458e-abf7-295bbcb056eb","Type":"ContainerDied","Data":"8a05b64d8f9e7bb1cf24d31fd8f35f299202ca8068976653c4afd83eabeed146"} Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.454682 4684 scope.go:117] "RemoveContainer" containerID="4937f150ec55aa79da7cdc30561940f527150e541196100275cbec8fd6e2118b" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.454890 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fbwn" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.472425 4684 scope.go:117] "RemoveContainer" containerID="fa5701514b99927b0bbc231850824030ca45a9d1890cc1934cf7a7ac274911f2" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.477749 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" podStartSLOduration=1.631256342 podStartE2EDuration="8.477730904s" podCreationTimestamp="2026-01-23 09:23:38 +0000 UTC" firstStartedPulling="2026-01-23 09:23:39.011129172 +0000 UTC m=+991.634507713" lastFinishedPulling="2026-01-23 09:23:45.857603734 +0000 UTC m=+998.480982275" observedRunningTime="2026-01-23 09:23:46.473324998 +0000 UTC m=+999.096703539" watchObservedRunningTime="2026-01-23 09:23:46.477730904 +0000 UTC m=+999.101109445" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.488918 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fbwn"] Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.495626 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5fbwn"] Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.504855 4684 scope.go:117] "RemoveContainer" containerID="51949249989e1d371397f309c3619bc9ad9f38cba7d431533e4e8ed7d6745f10" Jan 23 09:23:46 crc kubenswrapper[4684]: I0123 09:23:46.505964 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh" podStartSLOduration=2.1707289100000002 podStartE2EDuration="9.505945792s" podCreationTimestamp="2026-01-23 09:23:37 +0000 UTC" firstStartedPulling="2026-01-23 09:23:38.502300447 +0000 UTC m=+991.125678988" lastFinishedPulling="2026-01-23 09:23:45.837517329 +0000 UTC m=+998.460895870" observedRunningTime="2026-01-23 09:23:46.505300163 +0000 UTC m=+999.128678714" watchObservedRunningTime="2026-01-23 09:23:46.505945792 +0000 UTC m=+999.129324333" Jan 23 09:23:47 crc kubenswrapper[4684]: I0123 09:23:47.590222 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" path="/var/lib/kubelet/pods/731dfd08-3c63-458e-abf7-295bbcb056eb/volumes" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.138812 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qs4ng"] Jan 23 09:23:54 crc kubenswrapper[4684]: E0123 09:23:54.139545 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="extract-utilities" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.139559 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="extract-utilities" Jan 23 09:23:54 crc kubenswrapper[4684]: E0123 09:23:54.139580 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="extract-content" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.139586 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="extract-content" Jan 23 09:23:54 crc kubenswrapper[4684]: E0123 09:23:54.139594 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="registry-server" Jan 23 09:23:54 crc kubenswrapper[4684]: 
I0123 09:23:54.139600 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="registry-server" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.139715 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="731dfd08-3c63-458e-abf7-295bbcb056eb" containerName="registry-server" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.140482 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.160115 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs4ng"] Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.265923 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-catalog-content\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.265985 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-utilities\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.266083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mxgm\" (UniqueName: \"kubernetes.io/projected/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-kube-api-access-6mxgm\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.367513 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mxgm\" (UniqueName: \"kubernetes.io/projected/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-kube-api-access-6mxgm\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.368003 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-catalog-content\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.368449 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-catalog-content\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.368522 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-utilities\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc 
kubenswrapper[4684]: I0123 09:23:54.368946 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-utilities\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.394899 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mxgm\" (UniqueName: \"kubernetes.io/projected/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-kube-api-access-6mxgm\") pod \"redhat-marketplace-qs4ng\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.478435 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:23:54 crc kubenswrapper[4684]: I0123 09:23:54.827513 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs4ng"] Jan 23 09:23:55 crc kubenswrapper[4684]: I0123 09:23:55.508786 4684 generic.go:334] "Generic (PLEG): container finished" podID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerID="3d3f9c3aa3c6db1d814f1b02e06ee5b4d756cfbf47a1b0cb84f3edd42d500dd3" exitCode=0 Jan 23 09:23:55 crc kubenswrapper[4684]: I0123 09:23:55.508892 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs4ng" event={"ID":"6cb8cfbb-d407-4a39-bd33-0e0862c129cf","Type":"ContainerDied","Data":"3d3f9c3aa3c6db1d814f1b02e06ee5b4d756cfbf47a1b0cb84f3edd42d500dd3"} Jan 23 09:23:55 crc kubenswrapper[4684]: I0123 09:23:55.509118 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs4ng" event={"ID":"6cb8cfbb-d407-4a39-bd33-0e0862c129cf","Type":"ContainerStarted","Data":"e9208b3f92a4bc6b19ec2657e9b2f93f3feb471713eb9ca87218f03755c47709"} Jan 23 09:23:56 crc kubenswrapper[4684]: I0123 09:23:56.517371 4684 generic.go:334] "Generic (PLEG): container finished" podID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerID="5d8c5a4f40fa708498a6d03d58bcb12befbe504b1606ac7315c55c6a272ddd84" exitCode=0 Jan 23 09:23:56 crc kubenswrapper[4684]: I0123 09:23:56.517420 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs4ng" event={"ID":"6cb8cfbb-d407-4a39-bd33-0e0862c129cf","Type":"ContainerDied","Data":"5d8c5a4f40fa708498a6d03d58bcb12befbe504b1606ac7315c55c6a272ddd84"} Jan 23 09:23:57 crc kubenswrapper[4684]: I0123 09:23:57.525528 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs4ng" event={"ID":"6cb8cfbb-d407-4a39-bd33-0e0862c129cf","Type":"ContainerStarted","Data":"f6152bb6ef3ce549ea316b590deab91790fae2466ba687d87426b5d0e81c70b0"} Jan 23 09:23:57 crc kubenswrapper[4684]: I0123 09:23:57.546553 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qs4ng" podStartSLOduration=1.9819163290000001 podStartE2EDuration="3.546527604s" podCreationTimestamp="2026-01-23 09:23:54 +0000 UTC" firstStartedPulling="2026-01-23 09:23:55.512545874 +0000 UTC m=+1008.135924415" lastFinishedPulling="2026-01-23 09:23:57.077157149 +0000 UTC m=+1009.700535690" observedRunningTime="2026-01-23 09:23:57.546178894 +0000 UTC m=+1010.169557435" watchObservedRunningTime="2026-01-23 09:23:57.546527604 +0000 UTC 
m=+1010.169906145" Jan 23 09:23:58 crc kubenswrapper[4684]: I0123 09:23:58.417029 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-bfcb9dfcc-7qsz8" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.137419 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wwwwx"] Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.139062 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.161301 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwwwx"] Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.280411 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-utilities\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.280483 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492tf\" (UniqueName: \"kubernetes.io/projected/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-kube-api-access-492tf\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.280508 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-catalog-content\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.381823 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-utilities\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.381892 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-492tf\" (UniqueName: \"kubernetes.io/projected/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-kube-api-access-492tf\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.381918 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-catalog-content\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.382404 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-catalog-content\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " 
pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.382458 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-utilities\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.402431 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-492tf\" (UniqueName: \"kubernetes.io/projected/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-kube-api-access-492tf\") pod \"certified-operators-wwwwx\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.463349 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:02 crc kubenswrapper[4684]: I0123 09:24:02.783370 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwwwx"] Jan 23 09:24:03 crc kubenswrapper[4684]: I0123 09:24:03.571250 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwwwx" event={"ID":"65fcbaaf-5496-4005-b93f-9434ff9a9ef5","Type":"ContainerStarted","Data":"e01b5447af3ef795e3b25219be9f544610e9f74d3329ebabadbe2cf64ea06dce"} Jan 23 09:24:04 crc kubenswrapper[4684]: I0123 09:24:04.479422 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:24:04 crc kubenswrapper[4684]: I0123 09:24:04.480049 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:24:04 crc kubenswrapper[4684]: I0123 09:24:04.530865 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:24:04 crc kubenswrapper[4684]: I0123 09:24:04.577571 4684 generic.go:334] "Generic (PLEG): container finished" podID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerID="03f7aa3b928bb2b018bd7330162009123c9929585098cd440fa00d8626559eb6" exitCode=0 Jan 23 09:24:04 crc kubenswrapper[4684]: I0123 09:24:04.577641 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwwwx" event={"ID":"65fcbaaf-5496-4005-b93f-9434ff9a9ef5","Type":"ContainerDied","Data":"03f7aa3b928bb2b018bd7330162009123c9929585098cd440fa00d8626559eb6"} Jan 23 09:24:04 crc kubenswrapper[4684]: I0123 09:24:04.622226 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:24:06 crc kubenswrapper[4684]: I0123 09:24:06.593515 4684 generic.go:334] "Generic (PLEG): container finished" podID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerID="0f18cf9ad48ff3ded05f0e385af122c1c7dfb0a6ed636dccff021f6e5636813b" exitCode=0 Jan 23 09:24:06 crc kubenswrapper[4684]: I0123 09:24:06.593556 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwwwx" event={"ID":"65fcbaaf-5496-4005-b93f-9434ff9a9ef5","Type":"ContainerDied","Data":"0f18cf9ad48ff3ded05f0e385af122c1c7dfb0a6ed636dccff021f6e5636813b"} Jan 23 09:24:07 crc kubenswrapper[4684]: I0123 09:24:07.601644 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-wwwwx" event={"ID":"65fcbaaf-5496-4005-b93f-9434ff9a9ef5","Type":"ContainerStarted","Data":"bd68a648885f7d2505f54c4c83a615570b022081abb65372abca11dc8d286354"} Jan 23 09:24:07 crc kubenswrapper[4684]: I0123 09:24:07.623581 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wwwwx" podStartSLOduration=2.966699846 podStartE2EDuration="5.623564075s" podCreationTimestamp="2026-01-23 09:24:02 +0000 UTC" firstStartedPulling="2026-01-23 09:24:04.578885566 +0000 UTC m=+1017.202264097" lastFinishedPulling="2026-01-23 09:24:07.235749775 +0000 UTC m=+1019.859128326" observedRunningTime="2026-01-23 09:24:07.61916337 +0000 UTC m=+1020.242541921" watchObservedRunningTime="2026-01-23 09:24:07.623564075 +0000 UTC m=+1020.246942616" Jan 23 09:24:08 crc kubenswrapper[4684]: I0123 09:24:08.532503 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs4ng"] Jan 23 09:24:08 crc kubenswrapper[4684]: I0123 09:24:08.533084 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qs4ng" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="registry-server" containerID="cri-o://f6152bb6ef3ce549ea316b590deab91790fae2466ba687d87426b5d0e81c70b0" gracePeriod=2 Jan 23 09:24:09 crc kubenswrapper[4684]: I0123 09:24:09.614076 4684 generic.go:334] "Generic (PLEG): container finished" podID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerID="f6152bb6ef3ce549ea316b590deab91790fae2466ba687d87426b5d0e81c70b0" exitCode=0 Jan 23 09:24:09 crc kubenswrapper[4684]: I0123 09:24:09.614122 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs4ng" event={"ID":"6cb8cfbb-d407-4a39-bd33-0e0862c129cf","Type":"ContainerDied","Data":"f6152bb6ef3ce549ea316b590deab91790fae2466ba687d87426b5d0e81c70b0"} Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.006385 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.087071 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-catalog-content\") pod \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.087142 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-utilities\") pod \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.087222 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mxgm\" (UniqueName: \"kubernetes.io/projected/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-kube-api-access-6mxgm\") pod \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\" (UID: \"6cb8cfbb-d407-4a39-bd33-0e0862c129cf\") " Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.088037 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-utilities" (OuterVolumeSpecName: "utilities") pod "6cb8cfbb-d407-4a39-bd33-0e0862c129cf" (UID: "6cb8cfbb-d407-4a39-bd33-0e0862c129cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.101110 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-kube-api-access-6mxgm" (OuterVolumeSpecName: "kube-api-access-6mxgm") pod "6cb8cfbb-d407-4a39-bd33-0e0862c129cf" (UID: "6cb8cfbb-d407-4a39-bd33-0e0862c129cf"). InnerVolumeSpecName "kube-api-access-6mxgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.111692 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cb8cfbb-d407-4a39-bd33-0e0862c129cf" (UID: "6cb8cfbb-d407-4a39-bd33-0e0862c129cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.189290 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.189377 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.189412 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mxgm\" (UniqueName: \"kubernetes.io/projected/6cb8cfbb-d407-4a39-bd33-0e0862c129cf-kube-api-access-6mxgm\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.620971 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs4ng" event={"ID":"6cb8cfbb-d407-4a39-bd33-0e0862c129cf","Type":"ContainerDied","Data":"e9208b3f92a4bc6b19ec2657e9b2f93f3feb471713eb9ca87218f03755c47709"} Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.621029 4684 scope.go:117] "RemoveContainer" containerID="f6152bb6ef3ce549ea316b590deab91790fae2466ba687d87426b5d0e81c70b0" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.621048 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs4ng" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.643017 4684 scope.go:117] "RemoveContainer" containerID="5d8c5a4f40fa708498a6d03d58bcb12befbe504b1606ac7315c55c6a272ddd84" Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.658461 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs4ng"] Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.664155 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs4ng"] Jan 23 09:24:10 crc kubenswrapper[4684]: I0123 09:24:10.669303 4684 scope.go:117] "RemoveContainer" containerID="3d3f9c3aa3c6db1d814f1b02e06ee5b4d756cfbf47a1b0cb84f3edd42d500dd3" Jan 23 09:24:11 crc kubenswrapper[4684]: I0123 09:24:11.594862 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" path="/var/lib/kubelet/pods/6cb8cfbb-d407-4a39-bd33-0e0862c129cf/volumes" Jan 23 09:24:12 crc kubenswrapper[4684]: I0123 09:24:12.463868 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:12 crc kubenswrapper[4684]: I0123 09:24:12.463914 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:12 crc kubenswrapper[4684]: I0123 09:24:12.501929 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:12 crc kubenswrapper[4684]: I0123 09:24:12.668827 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:13 crc kubenswrapper[4684]: I0123 09:24:13.729049 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:24:13 crc kubenswrapper[4684]: I0123 09:24:13.729362 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:24:15 crc kubenswrapper[4684]: I0123 09:24:15.330756 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwwwx"] Jan 23 09:24:15 crc kubenswrapper[4684]: I0123 09:24:15.331039 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wwwwx" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="registry-server" containerID="cri-o://bd68a648885f7d2505f54c4c83a615570b022081abb65372abca11dc8d286354" gracePeriod=2 Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.658739 4684 generic.go:334] "Generic (PLEG): container finished" podID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerID="bd68a648885f7d2505f54c4c83a615570b022081abb65372abca11dc8d286354" exitCode=0 Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.658942 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwwwx" event={"ID":"65fcbaaf-5496-4005-b93f-9434ff9a9ef5","Type":"ContainerDied","Data":"bd68a648885f7d2505f54c4c83a615570b022081abb65372abca11dc8d286354"} Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.786592 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.882023 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-utilities\") pod \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.882184 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-492tf\" (UniqueName: \"kubernetes.io/projected/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-kube-api-access-492tf\") pod \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.882302 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-catalog-content\") pod \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\" (UID: \"65fcbaaf-5496-4005-b93f-9434ff9a9ef5\") " Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.889424 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-utilities" (OuterVolumeSpecName: "utilities") pod "65fcbaaf-5496-4005-b93f-9434ff9a9ef5" (UID: "65fcbaaf-5496-4005-b93f-9434ff9a9ef5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.901012 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-kube-api-access-492tf" (OuterVolumeSpecName: "kube-api-access-492tf") pod "65fcbaaf-5496-4005-b93f-9434ff9a9ef5" (UID: "65fcbaaf-5496-4005-b93f-9434ff9a9ef5"). InnerVolumeSpecName "kube-api-access-492tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.933952 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65fcbaaf-5496-4005-b93f-9434ff9a9ef5" (UID: "65fcbaaf-5496-4005-b93f-9434ff9a9ef5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.984336 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-492tf\" (UniqueName: \"kubernetes.io/projected/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-kube-api-access-492tf\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.984378 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:16 crc kubenswrapper[4684]: I0123 09:24:16.984391 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fcbaaf-5496-4005-b93f-9434ff9a9ef5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.666531 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwwwx" Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.666519 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwwwx" event={"ID":"65fcbaaf-5496-4005-b93f-9434ff9a9ef5","Type":"ContainerDied","Data":"e01b5447af3ef795e3b25219be9f544610e9f74d3329ebabadbe2cf64ea06dce"} Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.666660 4684 scope.go:117] "RemoveContainer" containerID="bd68a648885f7d2505f54c4c83a615570b022081abb65372abca11dc8d286354" Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.683583 4684 scope.go:117] "RemoveContainer" containerID="0f18cf9ad48ff3ded05f0e385af122c1c7dfb0a6ed636dccff021f6e5636813b" Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.691183 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwwwx"] Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.700068 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wwwwx"] Jan 23 09:24:17 crc kubenswrapper[4684]: I0123 09:24:17.702368 4684 scope.go:117] "RemoveContainer" containerID="03f7aa3b928bb2b018bd7330162009123c9929585098cd440fa00d8626559eb6" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.138465 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-66c47b49dd-q49fh" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962437 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zr9tk"] Jan 23 09:24:18 crc kubenswrapper[4684]: E0123 09:24:18.962838 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="extract-utilities" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962853 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="extract-utilities" Jan 23 09:24:18 crc kubenswrapper[4684]: E0123 09:24:18.962871 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="extract-utilities" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962879 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="extract-utilities" Jan 23 09:24:18 crc kubenswrapper[4684]: E0123 09:24:18.962892 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="registry-server" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962900 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="registry-server" Jan 23 09:24:18 crc kubenswrapper[4684]: E0123 09:24:18.962911 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="extract-content" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962918 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="extract-content" Jan 23 09:24:18 crc kubenswrapper[4684]: E0123 09:24:18.962932 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="registry-server" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962940 4684 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="registry-server" Jan 23 09:24:18 crc kubenswrapper[4684]: E0123 09:24:18.962956 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="extract-content" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.962963 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="extract-content" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.963094 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb8cfbb-d407-4a39-bd33-0e0862c129cf" containerName="registry-server" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.963108 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" containerName="registry-server" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.965529 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.968484 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"] Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.970899 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.978948 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nwmfl" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.979276 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.979534 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 09:24:18 crc kubenswrapper[4684]: I0123 09:24:18.981151 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.006515 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"] Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.090551 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v69pl"] Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.091654 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v69pl" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.098339 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.098720 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.098937 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2q25k" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.099115 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115735 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics-certs\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115792 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkt8g\" (UniqueName: \"kubernetes.io/projected/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-kube-api-access-tkt8g\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115832 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-conf\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115853 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4npj2\" (UniqueName: \"kubernetes.io/projected/9171f98d-dc3e-4258-9c6e-a8316190944d-kube-api-access-4npj2\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115885 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115917 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-startup\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.115949 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-reloader\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.116050 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.129792 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8v8jk"]
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.131304 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8v8jk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.137143 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.163659 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8v8jk"]
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217580 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-metrics-certs\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217638 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c673aad0-48c8-4410-9d62-028ebc02c103-metallb-excludel2\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217672 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217708 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics-certs\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217743 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7t9\" (UniqueName: \"kubernetes.io/projected/c673aad0-48c8-4410-9d62-028ebc02c103-kube-api-access-2p7t9\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217779 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkt8g\" (UniqueName: \"kubernetes.io/projected/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-kube-api-access-tkt8g\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"
\"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217828 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4npj2\" (UniqueName: \"kubernetes.io/projected/9171f98d-dc3e-4258-9c6e-a8316190944d-kube-api-access-4npj2\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217850 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-conf\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217877 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217908 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-cert\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217931 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-startup\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217958 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-reloader\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.217986 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.218011 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjh9q\" (UniqueName: \"kubernetes.io/projected/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-kube-api-access-qjh9q\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.218056 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-sockets\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc 
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.218560 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.218665 4684 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.218737 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics-certs podName:9171f98d-dc3e-4258-9c6e-a8316190944d nodeName:}" failed. No retries permitted until 2026-01-23 09:24:19.718716892 +0000 UTC m=+1032.342095433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics-certs") pod "frr-k8s-zr9tk" (UID: "9171f98d-dc3e-4258-9c6e-a8316190944d") : secret "frr-k8s-certs-secret" not found
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.219467 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-conf\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.219546 4684 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.219576 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-cert podName:ae885236-c9d2-4c57-bc11-a9aa077f5d1b nodeName:}" failed. No retries permitted until 2026-01-23 09:24:19.719566446 +0000 UTC m=+1032.342944997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-cert") pod "frr-k8s-webhook-server-7df86c4f6c-qp4nh" (UID: "ae885236-c9d2-4c57-bc11-a9aa077f5d1b") : secret "frr-k8s-webhook-server-cert" not found
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.220373 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-startup\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.220587 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-reloader\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.220816 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9171f98d-dc3e-4258-9c6e-a8316190944d-frr-sockets\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.247494 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4npj2\" (UniqueName: \"kubernetes.io/projected/9171f98d-dc3e-4258-9c6e-a8316190944d-kube-api-access-4npj2\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.261576 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkt8g\" (UniqueName: \"kubernetes.io/projected/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-kube-api-access-tkt8g\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.319723 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c673aad0-48c8-4410-9d62-028ebc02c103-metallb-excludel2\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.319796 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p7t9\" (UniqueName: \"kubernetes.io/projected/c673aad0-48c8-4410-9d62-028ebc02c103-kube-api-access-2p7t9\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.319868 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-cert\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.319906 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
pod="metallb-system/speaker-v69pl" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.319931 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjh9q\" (UniqueName: \"kubernetes.io/projected/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-kube-api-access-qjh9q\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.319978 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-metrics-certs\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.320011 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-metrics-certs\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl" Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.320762 4684 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.320817 4684 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.320831 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist podName:c673aad0-48c8-4410-9d62-028ebc02c103 nodeName:}" failed. No retries permitted until 2026-01-23 09:24:19.820811784 +0000 UTC m=+1032.444190335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist") pod "speaker-v69pl" (UID: "c673aad0-48c8-4410-9d62-028ebc02c103") : secret "metallb-memberlist" not found Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.320860 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-metrics-certs podName:b6455af6-22c5-44ad-a1fb-7d50f4a5271d nodeName:}" failed. No retries permitted until 2026-01-23 09:24:19.820848615 +0000 UTC m=+1032.444227156 (durationBeforeRetry 500ms). 
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.320943 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c673aad0-48c8-4410-9d62-028ebc02c103-metallb-excludel2\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.322802 4684 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.323458 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-metrics-certs\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.336089 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-cert\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.340863 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p7t9\" (UniqueName: \"kubernetes.io/projected/c673aad0-48c8-4410-9d62-028ebc02c103-kube-api-access-2p7t9\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.344434 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjh9q\" (UniqueName: \"kubernetes.io/projected/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-kube-api-access-qjh9q\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.590796 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65fcbaaf-5496-4005-b93f-9434ff9a9ef5" path="/var/lib/kubelet/pods/65fcbaaf-5496-4005-b93f-9434ff9a9ef5/volumes"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.724398 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics-certs\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.724494 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"
Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.728840 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9171f98d-dc3e-4258-9c6e-a8316190944d-metrics-certs\") pod \"frr-k8s-zr9tk\" (UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk"
(UID: \"9171f98d-dc3e-4258-9c6e-a8316190944d\") " pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.729264 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae885236-c9d2-4c57-bc11-a9aa077f5d1b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qp4nh\" (UID: \"ae885236-c9d2-4c57-bc11-a9aa077f5d1b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.826261 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.826545 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-metrics-certs\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.826432 4684 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 09:24:19 crc kubenswrapper[4684]: E0123 09:24:19.826687 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist podName:c673aad0-48c8-4410-9d62-028ebc02c103 nodeName:}" failed. No retries permitted until 2026-01-23 09:24:20.826665364 +0000 UTC m=+1033.450043905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist") pod "speaker-v69pl" (UID: "c673aad0-48c8-4410-9d62-028ebc02c103") : secret "metallb-memberlist" not found Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.830382 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6455af6-22c5-44ad-a1fb-7d50f4a5271d-metrics-certs\") pod \"controller-6968d8fdc4-8v8jk\" (UID: \"b6455af6-22c5-44ad-a1fb-7d50f4a5271d\") " pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.912102 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:19 crc kubenswrapper[4684]: I0123 09:24:19.934128 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.048716 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.372444 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"] Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.501776 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8v8jk"] Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.694534 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"4494300af7c98c358edb37c7d7027dfa4d65836b13a96b56c938a8fb9ae05f71"} Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.696144 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8v8jk" event={"ID":"b6455af6-22c5-44ad-a1fb-7d50f4a5271d","Type":"ContainerStarted","Data":"67ffe22df705d839e2feba80b7e4ab7a4f53a94582837d867329c58b756fa928"} Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.696208 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8v8jk" event={"ID":"b6455af6-22c5-44ad-a1fb-7d50f4a5271d","Type":"ContainerStarted","Data":"f97c51203b604e6fa4d9d4932af74c5a98cff1e2f17a725a8517a0b810e9d780"} Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.696907 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" event={"ID":"ae885236-c9d2-4c57-bc11-a9aa077f5d1b","Type":"ContainerStarted","Data":"cafb9ce837df74868135d71d1fad0b17c35237131b911aaa65fe83088740bda3"} Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.843067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl" Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.848917 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c673aad0-48c8-4410-9d62-028ebc02c103-memberlist\") pod \"speaker-v69pl\" (UID: \"c673aad0-48c8-4410-9d62-028ebc02c103\") " pod="metallb-system/speaker-v69pl" Jan 23 09:24:20 crc kubenswrapper[4684]: I0123 09:24:20.915623 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v69pl" Jan 23 09:24:20 crc kubenswrapper[4684]: W0123 09:24:20.933678 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc673aad0_48c8_4410_9d62_028ebc02c103.slice/crio-82662fcae9f499037a5903a75a0d4967a19c1d4f4ed16fd8c1276f9f23759cd6 WatchSource:0}: Error finding container 82662fcae9f499037a5903a75a0d4967a19c1d4f4ed16fd8c1276f9f23759cd6: Status 404 returned error can't find the container with id 82662fcae9f499037a5903a75a0d4967a19c1d4f4ed16fd8c1276f9f23759cd6 Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.704464 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v69pl" event={"ID":"c673aad0-48c8-4410-9d62-028ebc02c103","Type":"ContainerStarted","Data":"4da6024898503d498aad05f4c1d673d32f48b04062c8dc5f24beea9e98c22e87"} Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.704858 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v69pl" event={"ID":"c673aad0-48c8-4410-9d62-028ebc02c103","Type":"ContainerStarted","Data":"d994645b1c73ff6659dd1fb58f7fedc2f22bd5c17f7beb2d10e94e5338b5e5ca"} Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.704876 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v69pl" event={"ID":"c673aad0-48c8-4410-9d62-028ebc02c103","Type":"ContainerStarted","Data":"82662fcae9f499037a5903a75a0d4967a19c1d4f4ed16fd8c1276f9f23759cd6"} Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.705812 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v69pl" Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.708132 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8v8jk" event={"ID":"b6455af6-22c5-44ad-a1fb-7d50f4a5271d","Type":"ContainerStarted","Data":"e3ee9200ff8738cf31eb15ecb987dc3391138bc5c6386520f11971a688f5e28a"} Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.708948 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.754529 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v69pl" podStartSLOduration=2.754512986 podStartE2EDuration="2.754512986s" podCreationTimestamp="2026-01-23 09:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:24:21.730782026 +0000 UTC m=+1034.354160587" watchObservedRunningTime="2026-01-23 09:24:21.754512986 +0000 UTC m=+1034.377891517" Jan 23 09:24:21 crc kubenswrapper[4684]: I0123 09:24:21.755325 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8v8jk" podStartSLOduration=2.755321019 podStartE2EDuration="2.755321019s" podCreationTimestamp="2026-01-23 09:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:24:21.755239116 +0000 UTC m=+1034.378617667" watchObservedRunningTime="2026-01-23 09:24:21.755321019 +0000 UTC m=+1034.378699560" Jan 23 09:24:30 crc kubenswrapper[4684]: I0123 09:24:30.054463 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8v8jk" Jan 23 09:24:30 crc kubenswrapper[4684]: I0123 
Jan 23 09:24:30 crc kubenswrapper[4684]: I0123 09:24:30.778568 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh"
Jan 23 09:24:30 crc kubenswrapper[4684]: I0123 09:24:30.780623 4684 generic.go:334] "Generic (PLEG): container finished" podID="9171f98d-dc3e-4258-9c6e-a8316190944d" containerID="a2e16215f999d10655c57fcdf3fac8b8c20dc879970a5b3908508e70bf1d5a8d" exitCode=0
Jan 23 09:24:30 crc kubenswrapper[4684]: I0123 09:24:30.780713 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerDied","Data":"a2e16215f999d10655c57fcdf3fac8b8c20dc879970a5b3908508e70bf1d5a8d"}
Jan 23 09:24:30 crc kubenswrapper[4684]: I0123 09:24:30.843254 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" podStartSLOduration=2.831408536 podStartE2EDuration="12.843235827s" podCreationTimestamp="2026-01-23 09:24:18 +0000 UTC" firstStartedPulling="2026-01-23 09:24:20.397468073 +0000 UTC m=+1033.020846614" lastFinishedPulling="2026-01-23 09:24:30.409295364 +0000 UTC m=+1043.032673905" observedRunningTime="2026-01-23 09:24:30.816565139 +0000 UTC m=+1043.439943700" watchObservedRunningTime="2026-01-23 09:24:30.843235827 +0000 UTC m=+1043.466614368"
Jan 23 09:24:31 crc kubenswrapper[4684]: I0123 09:24:31.791126 4684 generic.go:334] "Generic (PLEG): container finished" podID="9171f98d-dc3e-4258-9c6e-a8316190944d" containerID="9393148b9ba712d90a5eb25fe6a6205f5e4a763a94832d9b93d1422fdcb65112" exitCode=0
Jan 23 09:24:31 crc kubenswrapper[4684]: I0123 09:24:31.793045 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerDied","Data":"9393148b9ba712d90a5eb25fe6a6205f5e4a763a94832d9b93d1422fdcb65112"}
Jan 23 09:24:32 crc kubenswrapper[4684]: I0123 09:24:32.800583 4684 generic.go:334] "Generic (PLEG): container finished" podID="9171f98d-dc3e-4258-9c6e-a8316190944d" containerID="5ca108ca2c42a663192ecc8964fcb120a8d5b11ce98db4d6591ecb329acc7cd3" exitCode=0
Jan 23 09:24:32 crc kubenswrapper[4684]: I0123 09:24:32.800617 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerDied","Data":"5ca108ca2c42a663192ecc8964fcb120a8d5b11ce98db4d6591ecb329acc7cd3"}
Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832431 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"cf8b8fd3cae7527b456c0cdecd6fa7aded326a96f03f6982a172f3c7b3d73d7d"}
Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832767 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zr9tk"
Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832781 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"4e36d32b09619f1658f47e37138a914dd8d614abb9f6ea8ad3e3353ec9ed6e28"}
event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"4e36d32b09619f1658f47e37138a914dd8d614abb9f6ea8ad3e3353ec9ed6e28"} Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832792 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"7275d7ff3bd8c50cd06dc44769b129d48412f551db2bdd3dfc13f442af7d5302"} Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832804 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"b454cd7a811b555eab01648e372fe68759c65434926a4d1a0695d04deb31e320"} Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832825 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"bcc2f64cbef39f333a52d796cb368f82e949f0344541463c6781b563d7f987ed"} Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.832836 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr9tk" event={"ID":"9171f98d-dc3e-4258-9c6e-a8316190944d","Type":"ContainerStarted","Data":"b65d808bf7d8d8cc9a1c03d8daaf6cb42122c8606848b5af2e545c3c1900d7fc"} Jan 23 09:24:33 crc kubenswrapper[4684]: I0123 09:24:33.867506 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zr9tk" podStartSLOduration=5.489794511 podStartE2EDuration="15.86749009s" podCreationTimestamp="2026-01-23 09:24:18 +0000 UTC" firstStartedPulling="2026-01-23 09:24:20.001751866 +0000 UTC m=+1032.625130407" lastFinishedPulling="2026-01-23 09:24:30.379447445 +0000 UTC m=+1043.002825986" observedRunningTime="2026-01-23 09:24:33.866242054 +0000 UTC m=+1046.489620595" watchObservedRunningTime="2026-01-23 09:24:33.86749009 +0000 UTC m=+1046.490868621" Jan 23 09:24:34 crc kubenswrapper[4684]: I0123 09:24:34.913117 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:34 crc kubenswrapper[4684]: I0123 09:24:34.948685 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:40 crc kubenswrapper[4684]: I0123 09:24:40.049876 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qp4nh" Jan 23 09:24:40 crc kubenswrapper[4684]: I0123 09:24:40.920949 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v69pl" Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.728889 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.729471 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.833939 4684 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/openstack-operator-index-zt5gw"] Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.834659 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.837254 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-pmzkc" Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.837313 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.839010 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.900100 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zt5gw"] Jan 23 09:24:43 crc kubenswrapper[4684]: I0123 09:24:43.990089 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d6t\" (UniqueName: \"kubernetes.io/projected/2c935634-e963-49ad-868b-7576011f21fb-kube-api-access-b5d6t\") pod \"openstack-operator-index-zt5gw\" (UID: \"2c935634-e963-49ad-868b-7576011f21fb\") " pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:44 crc kubenswrapper[4684]: I0123 09:24:44.091196 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d6t\" (UniqueName: \"kubernetes.io/projected/2c935634-e963-49ad-868b-7576011f21fb-kube-api-access-b5d6t\") pod \"openstack-operator-index-zt5gw\" (UID: \"2c935634-e963-49ad-868b-7576011f21fb\") " pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:44 crc kubenswrapper[4684]: I0123 09:24:44.111050 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d6t\" (UniqueName: \"kubernetes.io/projected/2c935634-e963-49ad-868b-7576011f21fb-kube-api-access-b5d6t\") pod \"openstack-operator-index-zt5gw\" (UID: \"2c935634-e963-49ad-868b-7576011f21fb\") " pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:44 crc kubenswrapper[4684]: I0123 09:24:44.154550 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:44 crc kubenswrapper[4684]: I0123 09:24:44.702859 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zt5gw"] Jan 23 09:24:45 crc kubenswrapper[4684]: I0123 09:24:45.084894 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zt5gw" event={"ID":"2c935634-e963-49ad-868b-7576011f21fb","Type":"ContainerStarted","Data":"b57c77da07521221912b73cfa896db7cf7bad6fd5a0cecd059704c9181943ad0"} Jan 23 09:24:46 crc kubenswrapper[4684]: I0123 09:24:46.092140 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zt5gw" event={"ID":"2c935634-e963-49ad-868b-7576011f21fb","Type":"ContainerStarted","Data":"bf6d8200e19474529a2d41291a98ad73d83f8176a127aafe36359ed2c019cb32"} Jan 23 09:24:46 crc kubenswrapper[4684]: I0123 09:24:46.110560 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zt5gw" podStartSLOduration=2.251960839 podStartE2EDuration="3.110542466s" podCreationTimestamp="2026-01-23 09:24:43 +0000 UTC" firstStartedPulling="2026-01-23 09:24:44.712297724 +0000 UTC m=+1057.335676265" lastFinishedPulling="2026-01-23 09:24:45.570879351 +0000 UTC m=+1058.194257892" observedRunningTime="2026-01-23 09:24:46.105903323 +0000 UTC m=+1058.729281864" watchObservedRunningTime="2026-01-23 09:24:46.110542466 +0000 UTC m=+1058.733921007" Jan 23 09:24:49 crc kubenswrapper[4684]: I0123 09:24:49.914999 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zr9tk" Jan 23 09:24:54 crc kubenswrapper[4684]: I0123 09:24:54.155737 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:54 crc kubenswrapper[4684]: I0123 09:24:54.156685 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:54 crc kubenswrapper[4684]: I0123 09:24:54.195574 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:24:55 crc kubenswrapper[4684]: I0123 09:24:55.184865 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zt5gw" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.333480 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp"] Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.335109 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.336995 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ctmcb" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.348989 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp"] Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.442670 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-util\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.443050 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-bundle\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.443086 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljk75\" (UniqueName: \"kubernetes.io/projected/985d0dfc-6e0c-4cdc-98c6-045b88957e25-kube-api-access-ljk75\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.544466 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-bundle\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.544522 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljk75\" (UniqueName: \"kubernetes.io/projected/985d0dfc-6e0c-4cdc-98c6-045b88957e25-kube-api-access-ljk75\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.544629 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-util\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.545108 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-bundle\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.545134 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-util\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.567772 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljk75\" (UniqueName: \"kubernetes.io/projected/985d0dfc-6e0c-4cdc-98c6-045b88957e25-kube-api-access-ljk75\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:01 crc kubenswrapper[4684]: I0123 09:25:01.695289 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:02 crc kubenswrapper[4684]: I0123 09:25:02.096107 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp"] Jan 23 09:25:02 crc kubenswrapper[4684]: I0123 09:25:02.198899 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" event={"ID":"985d0dfc-6e0c-4cdc-98c6-045b88957e25","Type":"ContainerStarted","Data":"7f6c6e902caf029f01193c6b0399b13bdd1c3d40614af23d90c4614c2a379227"} Jan 23 09:25:03 crc kubenswrapper[4684]: I0123 09:25:03.205674 4684 generic.go:334] "Generic (PLEG): container finished" podID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerID="dec70e8802bd927e852b9d52487c1e825f2af0b36c680ecdb1693c76bf3ff3c9" exitCode=0 Jan 23 09:25:03 crc kubenswrapper[4684]: I0123 09:25:03.206084 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" event={"ID":"985d0dfc-6e0c-4cdc-98c6-045b88957e25","Type":"ContainerDied","Data":"dec70e8802bd927e852b9d52487c1e825f2af0b36c680ecdb1693c76bf3ff3c9"} Jan 23 09:25:04 crc kubenswrapper[4684]: I0123 09:25:04.219069 4684 generic.go:334] "Generic (PLEG): container finished" podID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerID="33fdfb91ab38bf8b8a3469007a7b84507afac8802a2546f2f41693f140dd8790" exitCode=0 Jan 23 09:25:04 crc kubenswrapper[4684]: I0123 09:25:04.219178 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" event={"ID":"985d0dfc-6e0c-4cdc-98c6-045b88957e25","Type":"ContainerDied","Data":"33fdfb91ab38bf8b8a3469007a7b84507afac8802a2546f2f41693f140dd8790"} Jan 23 09:25:05 crc kubenswrapper[4684]: I0123 09:25:05.227053 4684 generic.go:334] "Generic (PLEG): container finished" podID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerID="2968d04426fff70860624ddc331cc05e93ae926d67e55baacf7c6b1b12fb8b97" exitCode=0 Jan 23 09:25:05 crc kubenswrapper[4684]: I0123 09:25:05.227173 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" event={"ID":"985d0dfc-6e0c-4cdc-98c6-045b88957e25","Type":"ContainerDied","Data":"2968d04426fff70860624ddc331cc05e93ae926d67e55baacf7c6b1b12fb8b97"} Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.523905 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.611379 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-util\") pod \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.611550 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-bundle\") pod \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.611590 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljk75\" (UniqueName: \"kubernetes.io/projected/985d0dfc-6e0c-4cdc-98c6-045b88957e25-kube-api-access-ljk75\") pod \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\" (UID: \"985d0dfc-6e0c-4cdc-98c6-045b88957e25\") " Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.612766 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-bundle" (OuterVolumeSpecName: "bundle") pod "985d0dfc-6e0c-4cdc-98c6-045b88957e25" (UID: "985d0dfc-6e0c-4cdc-98c6-045b88957e25"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.616134 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985d0dfc-6e0c-4cdc-98c6-045b88957e25-kube-api-access-ljk75" (OuterVolumeSpecName: "kube-api-access-ljk75") pod "985d0dfc-6e0c-4cdc-98c6-045b88957e25" (UID: "985d0dfc-6e0c-4cdc-98c6-045b88957e25"). InnerVolumeSpecName "kube-api-access-ljk75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.625093 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-util" (OuterVolumeSpecName: "util") pod "985d0dfc-6e0c-4cdc-98c6-045b88957e25" (UID: "985d0dfc-6e0c-4cdc-98c6-045b88957e25"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.713439 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.713474 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljk75\" (UniqueName: \"kubernetes.io/projected/985d0dfc-6e0c-4cdc-98c6-045b88957e25-kube-api-access-ljk75\") on node \"crc\" DevicePath \"\"" Jan 23 09:25:06 crc kubenswrapper[4684]: I0123 09:25:06.713493 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/985d0dfc-6e0c-4cdc-98c6-045b88957e25-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:25:07 crc kubenswrapper[4684]: I0123 09:25:07.241168 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" event={"ID":"985d0dfc-6e0c-4cdc-98c6-045b88957e25","Type":"ContainerDied","Data":"7f6c6e902caf029f01193c6b0399b13bdd1c3d40614af23d90c4614c2a379227"} Jan 23 09:25:07 crc kubenswrapper[4684]: I0123 09:25:07.241207 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f6c6e902caf029f01193c6b0399b13bdd1c3d40614af23d90c4614c2a379227" Jan 23 09:25:07 crc kubenswrapper[4684]: I0123 09:25:07.241229 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.512268 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw"] Jan 23 09:25:13 crc kubenswrapper[4684]: E0123 09:25:13.513240 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="pull" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.513257 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="pull" Jan 23 09:25:13 crc kubenswrapper[4684]: E0123 09:25:13.513274 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="util" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.513281 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="util" Jan 23 09:25:13 crc kubenswrapper[4684]: E0123 09:25:13.513293 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="extract" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.513301 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="extract" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.513430 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="985d0dfc-6e0c-4cdc-98c6-045b88957e25" containerName="extract" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.513961 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.516399 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-s8bhm" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.599935 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkgjt\" (UniqueName: \"kubernetes.io/projected/652bdac8-6488-4303-9d64-809a46258816-kube-api-access-qkgjt\") pod \"openstack-operator-controller-init-85bfd44c94-6dlkw\" (UID: \"652bdac8-6488-4303-9d64-809a46258816\") " pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.627544 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw"] Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.701079 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkgjt\" (UniqueName: \"kubernetes.io/projected/652bdac8-6488-4303-9d64-809a46258816-kube-api-access-qkgjt\") pod \"openstack-operator-controller-init-85bfd44c94-6dlkw\" (UID: \"652bdac8-6488-4303-9d64-809a46258816\") " pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.719082 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkgjt\" (UniqueName: \"kubernetes.io/projected/652bdac8-6488-4303-9d64-809a46258816-kube-api-access-qkgjt\") pod \"openstack-operator-controller-init-85bfd44c94-6dlkw\" (UID: \"652bdac8-6488-4303-9d64-809a46258816\") " pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.728648 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.728689 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.728747 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.729271 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d189a4bad8ef4c719b144352564a4f1767ae642d4e80c3912415bf811a82f8e8"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.729316 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" 
containerName="machine-config-daemon" containerID="cri-o://d189a4bad8ef4c719b144352564a4f1767ae642d4e80c3912415bf811a82f8e8" gracePeriod=600 Jan 23 09:25:13 crc kubenswrapper[4684]: I0123 09:25:13.832870 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:14 crc kubenswrapper[4684]: I0123 09:25:14.121145 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw"] Jan 23 09:25:14 crc kubenswrapper[4684]: I0123 09:25:14.289547 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="d189a4bad8ef4c719b144352564a4f1767ae642d4e80c3912415bf811a82f8e8" exitCode=0 Jan 23 09:25:14 crc kubenswrapper[4684]: I0123 09:25:14.289619 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"d189a4bad8ef4c719b144352564a4f1767ae642d4e80c3912415bf811a82f8e8"} Jan 23 09:25:14 crc kubenswrapper[4684]: I0123 09:25:14.289684 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"8ade61f7f4bbb3f3f435e6b903b0fe87d7cf6cd2ec8e018e44229efc22831425"} Jan 23 09:25:14 crc kubenswrapper[4684]: I0123 09:25:14.289723 4684 scope.go:117] "RemoveContainer" containerID="6a54cd0e651571067c33ee3cd9f4af92e5f9d59906264f1f012e4be5834f6450" Jan 23 09:25:14 crc kubenswrapper[4684]: I0123 09:25:14.291038 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" event={"ID":"652bdac8-6488-4303-9d64-809a46258816","Type":"ContainerStarted","Data":"503aaec86edd2049814825806e8ef97ae9956f6488abf74fb93d43b9e08a2021"} Jan 23 09:25:23 crc kubenswrapper[4684]: I0123 09:25:23.353131 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" event={"ID":"652bdac8-6488-4303-9d64-809a46258816","Type":"ContainerStarted","Data":"57690868eb35ba8af2982d44aeb813a8d3384b5fba383cb87ca15facdf3315ec"} Jan 23 09:25:23 crc kubenswrapper[4684]: I0123 09:25:23.353939 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:23 crc kubenswrapper[4684]: I0123 09:25:23.386830 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" podStartSLOduration=1.33556004 podStartE2EDuration="10.386808249s" podCreationTimestamp="2026-01-23 09:25:13 +0000 UTC" firstStartedPulling="2026-01-23 09:25:14.131811814 +0000 UTC m=+1086.755190355" lastFinishedPulling="2026-01-23 09:25:23.183060023 +0000 UTC m=+1095.806438564" observedRunningTime="2026-01-23 09:25:23.380476397 +0000 UTC m=+1096.003854938" watchObservedRunningTime="2026-01-23 09:25:23.386808249 +0000 UTC m=+1096.010186790" Jan 23 09:25:33 crc kubenswrapper[4684]: I0123 09:25:33.835943 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-6dlkw" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.457147 4684 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.458566 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.462852 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rt6zb" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.466104 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.467327 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.469107 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cfdjk" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.479762 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.490885 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.492115 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.493886 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gs8tp" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.508104 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.514592 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.541526 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.543229 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.546893 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-k4slr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.578352 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.580484 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.584551 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fq2rd" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.588708 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxr7w\" (UniqueName: \"kubernetes.io/projected/fd2ff302-08d1-4fd7-a45c-152155876b56-kube-api-access-vxr7w\") pod \"cinder-operator-controller-manager-69cf5d4557-srv5g\" (UID: \"fd2ff302-08d1-4fd7-a45c-152155876b56\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.588761 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzggn\" (UniqueName: \"kubernetes.io/projected/dc5b7444-cf61-439c-a7ed-3c97289e6cfe-kube-api-access-qzggn\") pod \"barbican-operator-controller-manager-59dd8b7cbf-sbkxr\" (UID: \"dc5b7444-cf61-439c-a7ed-3c97289e6cfe\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.588787 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5gqm\" (UniqueName: \"kubernetes.io/projected/31af0894-c5ac-41ef-842e-b7d01dfa2229-kube-api-access-b5gqm\") pod \"designate-operator-controller-manager-b45d7bf98-p77dl\" (UID: \"31af0894-c5ac-41ef-842e-b7d01dfa2229\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.594767 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.609681 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.621779 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.623019 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.631961 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rn2v4" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.645741 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.648292 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.653878 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.660208 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zv5ff" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.664399 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.672065 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.689659 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzc2v\" (UniqueName: \"kubernetes.io/projected/d61b277c-9b8c-423e-9b63-66dd812147c3-kube-api-access-wzc2v\") pod \"horizon-operator-controller-manager-77d5c5b54f-gc4d6\" (UID: \"d61b277c-9b8c-423e-9b63-66dd812147c3\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.689737 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxr7w\" (UniqueName: \"kubernetes.io/projected/fd2ff302-08d1-4fd7-a45c-152155876b56-kube-api-access-vxr7w\") pod \"cinder-operator-controller-manager-69cf5d4557-srv5g\" (UID: \"fd2ff302-08d1-4fd7-a45c-152155876b56\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.689803 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9b8\" (UniqueName: \"kubernetes.io/projected/294e6daa-1ac9-4afc-b489-f7cff06c18ec-kube-api-access-5c9b8\") pod \"heat-operator-controller-manager-594c8c9d5d-ht6sr\" (UID: \"294e6daa-1ac9-4afc-b489-f7cff06c18ec\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.689828 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzggn\" (UniqueName: \"kubernetes.io/projected/dc5b7444-cf61-439c-a7ed-3c97289e6cfe-kube-api-access-qzggn\") pod \"barbican-operator-controller-manager-59dd8b7cbf-sbkxr\" (UID: \"dc5b7444-cf61-439c-a7ed-3c97289e6cfe\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.689847 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5gqm\" (UniqueName: \"kubernetes.io/projected/31af0894-c5ac-41ef-842e-b7d01dfa2229-kube-api-access-b5gqm\") pod \"designate-operator-controller-manager-b45d7bf98-p77dl\" (UID: \"31af0894-c5ac-41ef-842e-b7d01dfa2229\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.689906 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44cwh\" (UniqueName: \"kubernetes.io/projected/299d3d78-4346-43f2-86f2-e1a3c20513a5-kube-api-access-44cwh\") pod 
\"glance-operator-controller-manager-78fdd796fd-hx5dq\" (UID: \"299d3d78-4346-43f2-86f2-e1a3c20513a5\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.730646 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.733926 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.735754 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cftwk" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.736369 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzggn\" (UniqueName: \"kubernetes.io/projected/dc5b7444-cf61-439c-a7ed-3c97289e6cfe-kube-api-access-qzggn\") pod \"barbican-operator-controller-manager-59dd8b7cbf-sbkxr\" (UID: \"dc5b7444-cf61-439c-a7ed-3c97289e6cfe\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.741119 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.749321 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxr7w\" (UniqueName: \"kubernetes.io/projected/fd2ff302-08d1-4fd7-a45c-152155876b56-kube-api-access-vxr7w\") pod \"cinder-operator-controller-manager-69cf5d4557-srv5g\" (UID: \"fd2ff302-08d1-4fd7-a45c-152155876b56\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.753913 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.754992 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.757386 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wm9cg" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.760128 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5gqm\" (UniqueName: \"kubernetes.io/projected/31af0894-c5ac-41ef-842e-b7d01dfa2229-kube-api-access-b5gqm\") pod \"designate-operator-controller-manager-b45d7bf98-p77dl\" (UID: \"31af0894-c5ac-41ef-842e-b7d01dfa2229\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.781141 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.791825 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzc2v\" (UniqueName: \"kubernetes.io/projected/d61b277c-9b8c-423e-9b63-66dd812147c3-kube-api-access-wzc2v\") pod \"horizon-operator-controller-manager-77d5c5b54f-gc4d6\" (UID: \"d61b277c-9b8c-423e-9b63-66dd812147c3\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.791892 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9b8\" (UniqueName: \"kubernetes.io/projected/294e6daa-1ac9-4afc-b489-f7cff06c18ec-kube-api-access-5c9b8\") pod \"heat-operator-controller-manager-594c8c9d5d-ht6sr\" (UID: \"294e6daa-1ac9-4afc-b489-f7cff06c18ec\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.791941 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44cwh\" (UniqueName: \"kubernetes.io/projected/299d3d78-4346-43f2-86f2-e1a3c20513a5-kube-api-access-44cwh\") pod \"glance-operator-controller-manager-78fdd796fd-hx5dq\" (UID: \"299d3d78-4346-43f2-86f2-e1a3c20513a5\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.791966 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7spr\" (UniqueName: \"kubernetes.io/projected/56e669a2-5990-45ad-8d32-e8d57ef7a81e-kube-api-access-p7spr\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.791991 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.792122 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.794807 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.795747 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.800031 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-c7gm8" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.810493 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.816242 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.817232 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.832331 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44cwh\" (UniqueName: \"kubernetes.io/projected/299d3d78-4346-43f2-86f2-e1a3c20513a5-kube-api-access-44cwh\") pod \"glance-operator-controller-manager-78fdd796fd-hx5dq\" (UID: \"299d3d78-4346-43f2-86f2-e1a3c20513a5\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.842350 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.843494 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.851325 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9b8\" (UniqueName: \"kubernetes.io/projected/294e6daa-1ac9-4afc-b489-f7cff06c18ec-kube-api-access-5c9b8\") pod \"heat-operator-controller-manager-594c8c9d5d-ht6sr\" (UID: \"294e6daa-1ac9-4afc-b489-f7cff06c18ec\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.851726 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mlll4" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.852113 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2wd29" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.885959 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzc2v\" (UniqueName: \"kubernetes.io/projected/d61b277c-9b8c-423e-9b63-66dd812147c3-kube-api-access-wzc2v\") pod \"horizon-operator-controller-manager-77d5c5b54f-gc4d6\" (UID: \"d61b277c-9b8c-423e-9b63-66dd812147c3\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.892795 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.893781 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngx6c\" (UniqueName: \"kubernetes.io/projected/e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7-kube-api-access-ngx6c\") pod \"manila-operator-controller-manager-78c6999f6f-skhwl\" (UID: \"e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.893857 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mbgr4\" (UniqueName: \"kubernetes.io/projected/67b55215-9df7-4273-8e15-27c0a969e065-kube-api-access-mbgr4\") pod \"keystone-operator-controller-manager-b8b6d4659-lfjfh\" (UID: \"67b55215-9df7-4273-8e15-27c0a969e065\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.893941 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fb8b\" (UniqueName: \"kubernetes.io/projected/5bb19409-93c9-4453-800c-ce2899b48427-kube-api-access-6fb8b\") pod \"ironic-operator-controller-manager-69d6c9f5b8-6s79c\" (UID: \"5bb19409-93c9-4453-800c-ce2899b48427\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.893994 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzp2m\" (UniqueName: \"kubernetes.io/projected/e1b45f19-8737-4f21-aade-d2b9cfda08fe-kube-api-access-lzp2m\") pod \"mariadb-operator-controller-manager-c87fff755-pl7fj\" (UID: \"e1b45f19-8737-4f21-aade-d2b9cfda08fe\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.894107 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7spr\" (UniqueName: \"kubernetes.io/projected/56e669a2-5990-45ad-8d32-e8d57ef7a81e-kube-api-access-p7spr\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.894211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:53 crc kubenswrapper[4684]: E0123 09:25:53.894371 4684 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 09:25:53 crc kubenswrapper[4684]: E0123 09:25:53.894442 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert podName:56e669a2-5990-45ad-8d32-e8d57ef7a81e nodeName:}" failed. No retries permitted until 2026-01-23 09:25:54.394422794 +0000 UTC m=+1127.017801335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert") pod "infra-operator-controller-manager-54ccf4f85d-t4lh8" (UID: "56e669a2-5990-45ad-8d32-e8d57ef7a81e") : secret "infra-operator-webhook-server-cert" not found Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.903483 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.907032 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.923749 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.932975 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.942405 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7spr\" (UniqueName: \"kubernetes.io/projected/56e669a2-5990-45ad-8d32-e8d57ef7a81e-kube-api-access-p7spr\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.956215 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz"] Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.957252 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.965348 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9xnkj" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.989058 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.997127 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjtkq\" (UniqueName: \"kubernetes.io/projected/9e4ad169-96f1-40ef-bedf-75d3a233ca35-kube-api-access-cjtkq\") pod \"neutron-operator-controller-manager-5d8f59fb49-7nv72\" (UID: \"9e4ad169-96f1-40ef-bedf-75d3a233ca35\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.997189 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngx6c\" (UniqueName: \"kubernetes.io/projected/e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7-kube-api-access-ngx6c\") pod \"manila-operator-controller-manager-78c6999f6f-skhwl\" (UID: \"e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.997224 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbgr4\" (UniqueName: \"kubernetes.io/projected/67b55215-9df7-4273-8e15-27c0a969e065-kube-api-access-mbgr4\") pod \"keystone-operator-controller-manager-b8b6d4659-lfjfh\" (UID: \"67b55215-9df7-4273-8e15-27c0a969e065\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.997305 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fb8b\" (UniqueName: \"kubernetes.io/projected/5bb19409-93c9-4453-800c-ce2899b48427-kube-api-access-6fb8b\") pod \"ironic-operator-controller-manager-69d6c9f5b8-6s79c\" (UID: \"5bb19409-93c9-4453-800c-ce2899b48427\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:25:53 crc kubenswrapper[4684]: I0123 09:25:53.997347 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzp2m\" (UniqueName: \"kubernetes.io/projected/e1b45f19-8737-4f21-aade-d2b9cfda08fe-kube-api-access-lzp2m\") pod \"mariadb-operator-controller-manager-c87fff755-pl7fj\" (UID: \"e1b45f19-8737-4f21-aade-d2b9cfda08fe\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.035265 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngx6c\" (UniqueName: \"kubernetes.io/projected/e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7-kube-api-access-ngx6c\") pod \"manila-operator-controller-manager-78c6999f6f-skhwl\" (UID: \"e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.057083 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.064293 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzp2m\" (UniqueName: \"kubernetes.io/projected/e1b45f19-8737-4f21-aade-d2b9cfda08fe-kube-api-access-lzp2m\") pod \"mariadb-operator-controller-manager-c87fff755-pl7fj\" (UID: \"e1b45f19-8737-4f21-aade-d2b9cfda08fe\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:25:54 crc 
kubenswrapper[4684]: I0123 09:25:54.084576 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.093378 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.099953 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.100870 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjtkq\" (UniqueName: \"kubernetes.io/projected/9e4ad169-96f1-40ef-bedf-75d3a233ca35-kube-api-access-cjtkq\") pod \"neutron-operator-controller-manager-5d8f59fb49-7nv72\" (UID: \"9e4ad169-96f1-40ef-bedf-75d3a233ca35\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.100955 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qtc4\" (UniqueName: \"kubernetes.io/projected/b1376fdd-31b4-4a7a-a9b6-1a38565083cb-kube-api-access-9qtc4\") pod \"nova-operator-controller-manager-6b8bc8d87d-jnlvz\" (UID: \"b1376fdd-31b4-4a7a-a9b6-1a38565083cb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.101280 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbgr4\" (UniqueName: \"kubernetes.io/projected/67b55215-9df7-4273-8e15-27c0a969e065-kube-api-access-mbgr4\") pod \"keystone-operator-controller-manager-b8b6d4659-lfjfh\" (UID: \"67b55215-9df7-4273-8e15-27c0a969e065\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.107630 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-whjnb" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.113674 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.122799 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.123875 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.130874 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.140125 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nt2db" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.140200 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.140830 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.165456 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.166431 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fb8b\" (UniqueName: \"kubernetes.io/projected/5bb19409-93c9-4453-800c-ce2899b48427-kube-api-access-6fb8b\") pod \"ironic-operator-controller-manager-69d6c9f5b8-6s79c\" (UID: \"5bb19409-93c9-4453-800c-ce2899b48427\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.166542 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjtkq\" (UniqueName: \"kubernetes.io/projected/9e4ad169-96f1-40ef-bedf-75d3a233ca35-kube-api-access-cjtkq\") pod \"neutron-operator-controller-manager-5d8f59fb49-7nv72\" (UID: \"9e4ad169-96f1-40ef-bedf-75d3a233ca35\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.182927 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.190133 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.191050 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.200058 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gnjlm" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.202134 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qtc4\" (UniqueName: \"kubernetes.io/projected/b1376fdd-31b4-4a7a-a9b6-1a38565083cb-kube-api-access-9qtc4\") pod \"nova-operator-controller-manager-6b8bc8d87d-jnlvz\" (UID: \"b1376fdd-31b4-4a7a-a9b6-1a38565083cb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.202206 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrk9\" (UniqueName: \"kubernetes.io/projected/2466d64b-62c9-422f-9609-5aaaa7de084c-kube-api-access-dmrk9\") pod \"octavia-operator-controller-manager-7bd9774b6-b82vt\" (UID: \"2466d64b-62c9-422f-9609-5aaaa7de084c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.202283 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.202320 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mm6\" (UniqueName: \"kubernetes.io/projected/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-kube-api-access-74mm6\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.257605 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.258970 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.271427 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-sgxp9" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.290893 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qtc4\" (UniqueName: \"kubernetes.io/projected/b1376fdd-31b4-4a7a-a9b6-1a38565083cb-kube-api-access-9qtc4\") pod \"nova-operator-controller-manager-6b8bc8d87d-jnlvz\" (UID: \"b1376fdd-31b4-4a7a-a9b6-1a38565083cb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.299508 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.300560 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.304163 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-t66ch" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.305250 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrk9\" (UniqueName: \"kubernetes.io/projected/2466d64b-62c9-422f-9609-5aaaa7de084c-kube-api-access-dmrk9\") pod \"octavia-operator-controller-manager-7bd9774b6-b82vt\" (UID: \"2466d64b-62c9-422f-9609-5aaaa7de084c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.305300 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6j86\" (UniqueName: \"kubernetes.io/projected/ba45281f-6224-4ce8-bc8e-df42f7e89340-kube-api-access-s6j86\") pod \"placement-operator-controller-manager-5d646b7d76-dbggg\" (UID: \"ba45281f-6224-4ce8-bc8e-df42f7e89340\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.305459 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.305490 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74mm6\" (UniqueName: \"kubernetes.io/projected/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-kube-api-access-74mm6\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.305564 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mbk\" (UniqueName: \"kubernetes.io/projected/0755ab86-427c-4e7b-8712-4db92f543c69-kube-api-access-l2mbk\") pod 
\"ovn-operator-controller-manager-55db956ddc-ll27v\" (UID: \"0755ab86-427c-4e7b-8712-4db92f543c69\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.305986 4684 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.306024 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert podName:b0bb140c-ce3d-4d8b-8627-67ae0145b2d4 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:54.806010643 +0000 UTC m=+1127.429389184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" (UID: "b0bb140c-ce3d-4d8b-8627-67ae0145b2d4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.344044 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.355772 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.376638 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mm6\" (UniqueName: \"kubernetes.io/projected/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-kube-api-access-74mm6\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.376737 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.377670 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrk9\" (UniqueName: \"kubernetes.io/projected/2466d64b-62c9-422f-9609-5aaaa7de084c-kube-api-access-dmrk9\") pod \"octavia-operator-controller-manager-7bd9774b6-b82vt\" (UID: \"2466d64b-62c9-422f-9609-5aaaa7de084c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.383264 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.407358 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.407408 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2mbk\" (UniqueName: \"kubernetes.io/projected/0755ab86-427c-4e7b-8712-4db92f543c69-kube-api-access-l2mbk\") pod \"ovn-operator-controller-manager-55db956ddc-ll27v\" (UID: \"0755ab86-427c-4e7b-8712-4db92f543c69\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.407478 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6j86\" (UniqueName: \"kubernetes.io/projected/ba45281f-6224-4ce8-bc8e-df42f7e89340-kube-api-access-s6j86\") pod \"placement-operator-controller-manager-5d646b7d76-dbggg\" (UID: \"ba45281f-6224-4ce8-bc8e-df42f7e89340\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.407508 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnl9\" (UniqueName: \"kubernetes.io/projected/ca0f93c0-4138-44c8-bd7d-027ced364a97-kube-api-access-7qnl9\") pod \"swift-operator-controller-manager-547cbdb99f-8cnrp\" (UID: \"ca0f93c0-4138-44c8-bd7d-027ced364a97\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.407656 4684 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.407724 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert podName:56e669a2-5990-45ad-8d32-e8d57ef7a81e nodeName:}" failed. No retries permitted until 2026-01-23 09:25:55.407680199 +0000 UTC m=+1128.031058740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert") pod "infra-operator-controller-manager-54ccf4f85d-t4lh8" (UID: "56e669a2-5990-45ad-8d32-e8d57ef7a81e") : secret "infra-operator-webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.427036 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.446450 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2mbk\" (UniqueName: \"kubernetes.io/projected/0755ab86-427c-4e7b-8712-4db92f543c69-kube-api-access-l2mbk\") pod \"ovn-operator-controller-manager-55db956ddc-ll27v\" (UID: \"0755ab86-427c-4e7b-8712-4db92f543c69\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.457829 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.460018 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.469930 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-m4lf9" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.476378 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6j86\" (UniqueName: \"kubernetes.io/projected/ba45281f-6224-4ce8-bc8e-df42f7e89340-kube-api-access-s6j86\") pod \"placement-operator-controller-manager-5d646b7d76-dbggg\" (UID: \"ba45281f-6224-4ce8-bc8e-df42f7e89340\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.497443 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.503040 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.508368 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qnl9\" (UniqueName: \"kubernetes.io/projected/ca0f93c0-4138-44c8-bd7d-027ced364a97-kube-api-access-7qnl9\") pod \"swift-operator-controller-manager-547cbdb99f-8cnrp\" (UID: \"ca0f93c0-4138-44c8-bd7d-027ced364a97\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.515292 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmvk\" (UniqueName: \"kubernetes.io/projected/b3f2f6c1-234f-457b-b335-f7e732976b73-kube-api-access-rtmvk\") pod \"test-operator-controller-manager-69797bbcbd-2f7kg\" (UID: \"b3f2f6c1-234f-457b-b335-f7e732976b73\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.514498 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4gq7q" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.509133 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.527886 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.531809 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.543275 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qnl9\" (UniqueName: \"kubernetes.io/projected/ca0f93c0-4138-44c8-bd7d-027ced364a97-kube-api-access-7qnl9\") pod \"swift-operator-controller-manager-547cbdb99f-8cnrp\" (UID: \"ca0f93c0-4138-44c8-bd7d-027ced364a97\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.578902 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.579447 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.580012 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.590180 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qlwff" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.604114 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.619078 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rq6\" (UniqueName: \"kubernetes.io/projected/afb73601-eb5b-44cd-9f30-4e38a4cc28be-kube-api-access-d7rq6\") pod \"watcher-operator-controller-manager-5ffb9c6597-sx2td\" (UID: \"afb73601-eb5b-44cd-9f30-4e38a4cc28be\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.619140 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdw2n\" (UniqueName: \"kubernetes.io/projected/829a9115-60b9-4f34-811a-1acc4cbd9897-kube-api-access-fdw2n\") pod \"telemetry-operator-controller-manager-85cd9769bb-4rk7k\" (UID: \"829a9115-60b9-4f34-811a-1acc4cbd9897\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.619186 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmvk\" (UniqueName: \"kubernetes.io/projected/b3f2f6c1-234f-457b-b335-f7e732976b73-kube-api-access-rtmvk\") pod \"test-operator-controller-manager-69797bbcbd-2f7kg\" (UID: \"b3f2f6c1-234f-457b-b335-f7e732976b73\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.619780 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.636321 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.671612 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmvk\" (UniqueName: \"kubernetes.io/projected/b3f2f6c1-234f-457b-b335-f7e732976b73-kube-api-access-rtmvk\") pod \"test-operator-controller-manager-69797bbcbd-2f7kg\" (UID: \"b3f2f6c1-234f-457b-b335-f7e732976b73\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.690957 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.723545 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdw2n\" (UniqueName: \"kubernetes.io/projected/829a9115-60b9-4f34-811a-1acc4cbd9897-kube-api-access-fdw2n\") pod \"telemetry-operator-controller-manager-85cd9769bb-4rk7k\" (UID: \"829a9115-60b9-4f34-811a-1acc4cbd9897\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.723690 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rq6\" (UniqueName: \"kubernetes.io/projected/afb73601-eb5b-44cd-9f30-4e38a4cc28be-kube-api-access-d7rq6\") pod \"watcher-operator-controller-manager-5ffb9c6597-sx2td\" (UID: \"afb73601-eb5b-44cd-9f30-4e38a4cc28be\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.741572 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.742535 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.749797 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.750026 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.750166 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f5kkf" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.763714 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdw2n\" (UniqueName: \"kubernetes.io/projected/829a9115-60b9-4f34-811a-1acc4cbd9897-kube-api-access-fdw2n\") pod \"telemetry-operator-controller-manager-85cd9769bb-4rk7k\" (UID: \"829a9115-60b9-4f34-811a-1acc4cbd9897\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.805000 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.821996 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.827105 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.827368 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.827406 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xhlz\" (UniqueName: \"kubernetes.io/projected/ef474359-484b-4042-8d86-0aa2fce7a260-kube-api-access-2xhlz\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.827423 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.828017 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rq6\" (UniqueName: \"kubernetes.io/projected/afb73601-eb5b-44cd-9f30-4e38a4cc28be-kube-api-access-d7rq6\") pod \"watcher-operator-controller-manager-5ffb9c6597-sx2td\" (UID: \"afb73601-eb5b-44cd-9f30-4e38a4cc28be\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.828111 4684 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.828154 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert podName:b0bb140c-ce3d-4d8b-8627-67ae0145b2d4 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:55.828139724 +0000 UTC m=+1128.451518255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" (UID: "b0bb140c-ce3d-4d8b-8627-67ae0145b2d4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.865176 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.920028 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.926385 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.933168 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.933989 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.934063 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xhlz\" (UniqueName: \"kubernetes.io/projected/ef474359-484b-4042-8d86-0aa2fce7a260-kube-api-access-2xhlz\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.934101 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.934307 4684 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.934378 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:55.434362412 +0000 UTC m=+1128.057740953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "webhook-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.934477 4684 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: E0123 09:25:54.934548 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:55.434532387 +0000 UTC m=+1128.057910928 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "metrics-server-cert" not found Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.950896 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk"] Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.963077 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rpjk4" Jan 23 09:25:54 crc kubenswrapper[4684]: I0123 09:25:54.996277 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.010513 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xhlz\" (UniqueName: \"kubernetes.io/projected/ef474359-484b-4042-8d86-0aa2fce7a260-kube-api-access-2xhlz\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.038431 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d59d\" (UniqueName: \"kubernetes.io/projected/b45428ef-0f84-4d58-ab99-9d7e26470caa-kube-api-access-4d59d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c6nkk\" (UID: \"b45428ef-0f84-4d58-ab99-9d7e26470caa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.097770 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr"] Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.136808 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl"] Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.140429 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d59d\" (UniqueName: \"kubernetes.io/projected/b45428ef-0f84-4d58-ab99-9d7e26470caa-kube-api-access-4d59d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c6nkk\" (UID: \"b45428ef-0f84-4d58-ab99-9d7e26470caa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.177667 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d59d\" (UniqueName: \"kubernetes.io/projected/b45428ef-0f84-4d58-ab99-9d7e26470caa-kube-api-access-4d59d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c6nkk\" (UID: \"b45428ef-0f84-4d58-ab99-9d7e26470caa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.202723 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g"] Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.284031 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.285988 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq"] Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.444533 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.444989 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.445199 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.445350 4684 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.445435 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:56.445390304 +0000 UTC m=+1129.068768855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "metrics-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.445918 4684 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.445948 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert podName:56e669a2-5990-45ad-8d32-e8d57ef7a81e nodeName:}" failed. No retries permitted until 2026-01-23 09:25:57.4459387 +0000 UTC m=+1130.069317241 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert") pod "infra-operator-controller-manager-54ccf4f85d-t4lh8" (UID: "56e669a2-5990-45ad-8d32-e8d57ef7a81e") : secret "infra-operator-webhook-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.445994 4684 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.446019 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:56.446012472 +0000 UTC m=+1129.069391013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "webhook-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.629096 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" event={"ID":"31af0894-c5ac-41ef-842e-b7d01dfa2229","Type":"ContainerStarted","Data":"e57c80fee89d638dea59830f859f5b29da4f8d494a5c9b3a3262192b99be99a0"} Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.629132 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" event={"ID":"dc5b7444-cf61-439c-a7ed-3c97289e6cfe","Type":"ContainerStarted","Data":"fbbaee12956107e63858d5816bf018e2ef5654aede8ae664d1245ddb155c28ec"} Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.629143 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" event={"ID":"299d3d78-4346-43f2-86f2-e1a3c20513a5","Type":"ContainerStarted","Data":"4afcc101d0d7a4a2c24fc4c1ae846aa054e383b1a1d4077d848b12a67e152a8c"} Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.629153 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" event={"ID":"fd2ff302-08d1-4fd7-a45c-152155876b56","Type":"ContainerStarted","Data":"176d72cd504ab273afcbe1c137d3f0ec513f5008e97ff03b92c821e02713fc6c"} Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.852138 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.852772 4684 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: E0123 09:25:55.852830 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert podName:b0bb140c-ce3d-4d8b-8627-67ae0145b2d4 nodeName:}" failed. 
No retries permitted until 2026-01-23 09:25:57.852807653 +0000 UTC m=+1130.476186194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" (UID: "b0bb140c-ce3d-4d8b-8627-67ae0145b2d4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:25:55 crc kubenswrapper[4684]: I0123 09:25:55.967873 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.010495 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.026831 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.057253 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.083956 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.424199 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.443767 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.471555 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.471907 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.472046 4684 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.472092 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:58.472077711 +0000 UTC m=+1131.095456252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "webhook-server-cert" not found Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.472414 4684 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.472447 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:25:58.472437301 +0000 UTC m=+1131.095815842 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "metrics-server-cert" not found Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.515240 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt"] Jan 23 09:25:56 crc kubenswrapper[4684]: W0123 09:25:56.532831 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd61b277c_9b8c_423e_9b63_66dd812147c3.slice/crio-23a3f8c633cf3456431c790b0a6a7a2a1a40fe7c7f1709b25e961a712118eb9e WatchSource:0}: Error finding container 23a3f8c633cf3456431c790b0a6a7a2a1a40fe7c7f1709b25e961a712118eb9e: Status 404 returned error can't find the container with id 23a3f8c633cf3456431c790b0a6a7a2a1a40fe7c7f1709b25e961a712118eb9e Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.537783 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.553068 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.591860 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.596913 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.603960 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.612077 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k"] Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.685547 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fb8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-6s79c_openstack-operators(5bb19409-93c9-4453-800c-ce2899b48427): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.686910 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" podUID="5bb19409-93c9-4453-800c-ce2899b48427" Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.697847 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" event={"ID":"294e6daa-1ac9-4afc-b489-f7cff06c18ec","Type":"ContainerStarted","Data":"f3e58b54908aab9c5fb1f505bfae450be0ef7fb0fd522ebcfbedfe95273f9cbc"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.703580 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" event={"ID":"d61b277c-9b8c-423e-9b63-66dd812147c3","Type":"ContainerStarted","Data":"23a3f8c633cf3456431c790b0a6a7a2a1a40fe7c7f1709b25e961a712118eb9e"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.705453 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" 
event={"ID":"e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7","Type":"ContainerStarted","Data":"24a93ce04163f409483096f31af8ba8a51d5e59bfb4278e5f6e291849e96fa47"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.714286 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" event={"ID":"5bb19409-93c9-4453-800c-ce2899b48427","Type":"ContainerStarted","Data":"7923e0f4647ef7c61e385108b1bfaaba57aa834d4f3c6885ba5b18775ab92054"} Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.718053 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" podUID="5bb19409-93c9-4453-800c-ce2899b48427" Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.718728 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" event={"ID":"9e4ad169-96f1-40ef-bedf-75d3a233ca35","Type":"ContainerStarted","Data":"6c0c50a4d1af5e197f2a9a2df640ac350ffd6058a3aaeacc34dc0a94cbf8d2fb"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.722408 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" event={"ID":"67b55215-9df7-4273-8e15-27c0a969e065","Type":"ContainerStarted","Data":"5d86a7081e597eb8d9b9db6c3ae7a23ebad532d0db3590d18ebe9b7c4d5cab54"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.724688 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" event={"ID":"b1376fdd-31b4-4a7a-a9b6-1a38565083cb","Type":"ContainerStarted","Data":"424e9d6579c53b890f5d11d02d74da55d43f96e777902aa861685cf81d7e79a3"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.733709 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" event={"ID":"2466d64b-62c9-422f-9609-5aaaa7de084c","Type":"ContainerStarted","Data":"b5b0beccb8324c4d1ff466443af38a534d4a735e97ec4f368d4ff06f62b7e1a7"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.769331 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" event={"ID":"ba45281f-6224-4ce8-bc8e-df42f7e89340","Type":"ContainerStarted","Data":"aa6b08c6af67ea494546fee894b39b818e2457b26d63a73a1ab232988fc16bec"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.778770 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" event={"ID":"ca0f93c0-4138-44c8-bd7d-027ced364a97","Type":"ContainerStarted","Data":"1a29c2183a897770183c1d3d61489bd415054754950ffb5a25ab8c0c095172fc"} Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.786854 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.796358 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk"] Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.802913 4684 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" event={"ID":"0755ab86-427c-4e7b-8712-4db92f543c69","Type":"ContainerStarted","Data":"0c4c6a34c6f8dbb033884ebb7ab1dfc3265e80bee81c5a135a26e6afd342fd32"}
Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.822092 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" event={"ID":"e1b45f19-8737-4f21-aade-d2b9cfda08fe","Type":"ContainerStarted","Data":"99f4cd2e234821a172ed631d746c6ee1367b2168cc066ae885912c85e2699ff9"}
Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.847483 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" event={"ID":"829a9115-60b9-4f34-811a-1acc4cbd9897","Type":"ContainerStarted","Data":"714dba6334c4eb0c3b9961e664831b916fa6a64aba6672deb8b73b48f9399d8d"}
Jan 23 09:25:56 crc kubenswrapper[4684]: I0123 09:25:56.867863 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" event={"ID":"b3f2f6c1-234f-457b-b335-f7e732976b73","Type":"ContainerStarted","Data":"4201963c38d423b130b85a448a3535498a7386911742be3af4d14bb6149b766e"}
Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.895509 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c6nkk_openstack-operators(b45428ef-0f84-4d58-ab99-9d7e26470caa): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
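The ErrImagePull "pull QPS exceeded" failures above are not registry errors: many operator pods landed on the node at once and the kubelet's own client-side image-pull rate limiter rejected the excess pulls. That limiter is a token bucket governed by the registryPullQPS and registryBurst fields of the KubeletConfiguration (defaults 5 and 10). A minimal Go sketch of the behavior, assuming those defaults and using a stand-in pull loop (the kubelet wraps its image puller with client-go's flowcontrol limiter; the loop here is illustrative, not kubelet code):

// Token-bucket throttling that produces "pull QPS exceeded", assuming the
// kubelet defaults registryPullQPS=5 and registryBurst=10. The pull loop is
// a stand-in for the kubelet's image puller.
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10) // qps, burst

	// Twelve near-simultaneous pulls: the first ten drain the burst bucket,
	// the rest are rejected until tokens refill at 5/s.
	for i := 1; i <= 12; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %d: admitted\n", i)
		} else {
			// The kubelet surfaces this as ErrImagePull: "pull QPS exceeded".
			fmt.Printf("pull %d: rejected, QPS exceeded\n", i)
		}
	}
}

Rejected pods fall back to ImagePullBackOff and are retried, which is exactly what the pod_workers.go "Back-off pulling image" entries that follow record.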
Jan 23 09:25:56 crc kubenswrapper[4684]: E0123 09:25:56.898711 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" podUID="b45428ef-0f84-4d58-ab99-9d7e26470caa"
Jan 23 09:25:57 crc kubenswrapper[4684]: I0123 09:25:57.515409 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8"
Jan 23 09:25:57 crc kubenswrapper[4684]: E0123 09:25:57.515609 4684 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 09:25:57 crc kubenswrapper[4684]: E0123 09:25:57.515658 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert podName:56e669a2-5990-45ad-8d32-e8d57ef7a81e nodeName:}" failed. No retries permitted until 2026-01-23 09:26:01.515641992 +0000 UTC m=+1134.139020533 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert") pod "infra-operator-controller-manager-54ccf4f85d-t4lh8" (UID: "56e669a2-5990-45ad-8d32-e8d57ef7a81e") : secret "infra-operator-webhook-server-cert" not found
Jan 23 09:25:57 crc kubenswrapper[4684]: I0123 09:25:57.890433 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" event={"ID":"afb73601-eb5b-44cd-9f30-4e38a4cc28be","Type":"ContainerStarted","Data":"4c6a055010d6683204e67d27b53f1ae16dc50589e724112297adc8b3285f5527"}
Jan 23 09:25:57 crc kubenswrapper[4684]: I0123 09:25:57.893461 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" event={"ID":"b45428ef-0f84-4d58-ab99-9d7e26470caa","Type":"ContainerStarted","Data":"9be5f695210c88820f653b1a042dcb4ea1a6bb68b2924b4d84f49927dad93a73"}
Jan 23 09:25:57 crc kubenswrapper[4684]: E0123 09:25:57.894415 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" podUID="5bb19409-93c9-4453-800c-ce2899b48427"
Jan 23 09:25:57 crc kubenswrapper[4684]: E0123 09:25:57.906748 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" podUID="b45428ef-0f84-4d58-ab99-9d7e26470caa"
Jan 23 09:25:57 crc kubenswrapper[4684]: I0123 09:25:57.943730 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq"
Jan 23 09:25:57 crc kubenswrapper[4684]: E0123 09:25:57.943929 4684 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 09:25:57 crc kubenswrapper[4684]: E0123 09:25:57.943974 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert podName:b0bb140c-ce3d-4d8b-8627-67ae0145b2d4 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:01.943959993 +0000 UTC m=+1134.567338534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" (UID: "b0bb140c-ce3d-4d8b-8627-67ae0145b2d4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 09:25:58 crc kubenswrapper[4684]: I0123 09:25:58.569557 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl"
Jan 23 09:25:58 crc kubenswrapper[4684]: I0123 09:25:58.569625 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl"
Jan 23 09:25:58 crc kubenswrapper[4684]: E0123 09:25:58.569780 4684 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 23 09:25:58 crc kubenswrapper[4684]: E0123 09:25:58.569827 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:02.569812911 +0000 UTC m=+1135.193191442 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "webhook-server-cert" not found
Jan 23 09:25:58 crc kubenswrapper[4684]: E0123 09:25:58.570235 4684 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "metrics-server-cert" not found Jan 23 09:25:58 crc kubenswrapper[4684]: E0123 09:25:58.938173 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" podUID="b45428ef-0f84-4d58-ab99-9d7e26470caa" Jan 23 09:26:01 crc kubenswrapper[4684]: I0123 09:26:01.548410 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:26:01 crc kubenswrapper[4684]: E0123 09:26:01.548571 4684 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 09:26:01 crc kubenswrapper[4684]: E0123 09:26:01.548628 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert podName:56e669a2-5990-45ad-8d32-e8d57ef7a81e nodeName:}" failed. No retries permitted until 2026-01-23 09:26:09.548610725 +0000 UTC m=+1142.171989266 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert") pod "infra-operator-controller-manager-54ccf4f85d-t4lh8" (UID: "56e669a2-5990-45ad-8d32-e8d57ef7a81e") : secret "infra-operator-webhook-server-cert" not found Jan 23 09:26:01 crc kubenswrapper[4684]: I0123 09:26:01.954466 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:26:01 crc kubenswrapper[4684]: E0123 09:26:01.954988 4684 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:26:01 crc kubenswrapper[4684]: E0123 09:26:01.955076 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert podName:b0bb140c-ce3d-4d8b-8627-67ae0145b2d4 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:09.955056186 +0000 UTC m=+1142.578434727 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" (UID: "b0bb140c-ce3d-4d8b-8627-67ae0145b2d4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 09:26:02 crc kubenswrapper[4684]: I0123 09:26:02.664860 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:02 crc kubenswrapper[4684]: I0123 09:26:02.665460 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:02 crc kubenswrapper[4684]: E0123 09:26:02.665064 4684 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 09:26:02 crc kubenswrapper[4684]: E0123 09:26:02.665561 4684 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 09:26:02 crc kubenswrapper[4684]: E0123 09:26:02.665665 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:10.665643693 +0000 UTC m=+1143.289022234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "metrics-server-cert" not found Jan 23 09:26:02 crc kubenswrapper[4684]: E0123 09:26:02.665750 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:10.665693104 +0000 UTC m=+1143.289071635 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "webhook-server-cert" not found Jan 23 09:26:09 crc kubenswrapper[4684]: I0123 09:26:09.576772 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:26:09 crc kubenswrapper[4684]: I0123 09:26:09.591650 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56e669a2-5990-45ad-8d32-e8d57ef7a81e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-t4lh8\" (UID: \"56e669a2-5990-45ad-8d32-e8d57ef7a81e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:26:09 crc kubenswrapper[4684]: I0123 09:26:09.600325 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zv5ff" Jan 23 09:26:09 crc kubenswrapper[4684]: I0123 09:26:09.608907 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:26:09 crc kubenswrapper[4684]: I0123 09:26:09.982776 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:26:09 crc kubenswrapper[4684]: I0123 09:26:09.986272 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0bb140c-ce3d-4d8b-8627-67ae0145b2d4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq\" (UID: \"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:26:10 crc kubenswrapper[4684]: I0123 09:26:10.153657 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nt2db" Jan 23 09:26:10 crc kubenswrapper[4684]: I0123 09:26:10.161718 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:26:10 crc kubenswrapper[4684]: I0123 09:26:10.693445 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:10 crc kubenswrapper[4684]: I0123 09:26:10.693600 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:10 crc kubenswrapper[4684]: E0123 09:26:10.694200 4684 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 09:26:10 crc kubenswrapper[4684]: E0123 09:26:10.694292 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs podName:ef474359-484b-4042-8d86-0aa2fce7a260 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:26.694271483 +0000 UTC m=+1159.317650024 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-s5vdl" (UID: "ef474359-484b-4042-8d86-0aa2fce7a260") : secret "webhook-server-cert" not found Jan 23 09:26:10 crc kubenswrapper[4684]: I0123 09:26:10.697210 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:12 crc kubenswrapper[4684]: E0123 09:26:12.862829 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 23 09:26:12 crc kubenswrapper[4684]: E0123 09:26:12.863259 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qnl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-8cnrp_openstack-operators(ca0f93c0-4138-44c8-bd7d-027ced364a97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:12 crc kubenswrapper[4684]: E0123 09:26:12.864478 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" podUID="ca0f93c0-4138-44c8-bd7d-027ced364a97" Jan 23 09:26:13 crc kubenswrapper[4684]: E0123 09:26:13.167866 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" podUID="ca0f93c0-4138-44c8-bd7d-027ced364a97" Jan 23 09:26:13 crc kubenswrapper[4684]: E0123 09:26:13.575122 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 09:26:13 crc kubenswrapper[4684]: E0123 09:26:13.575640 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rtmvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-2f7kg_openstack-operators(b3f2f6c1-234f-457b-b335-f7e732976b73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:13 crc kubenswrapper[4684]: E0123 09:26:13.576981 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" podUID="b3f2f6c1-234f-457b-b335-f7e732976b73" Jan 23 09:26:14 crc kubenswrapper[4684]: E0123 09:26:14.163929 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 23 09:26:14 crc kubenswrapper[4684]: E0123 09:26:14.164214 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lzp2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-pl7fj_openstack-operators(e1b45f19-8737-4f21-aade-d2b9cfda08fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:14 crc kubenswrapper[4684]: E0123 09:26:14.165540 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" podUID="e1b45f19-8737-4f21-aade-d2b9cfda08fe" Jan 23 09:26:14 crc kubenswrapper[4684]: E0123 09:26:14.178682 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" podUID="e1b45f19-8737-4f21-aade-d2b9cfda08fe" Jan 23 09:26:14 crc kubenswrapper[4684]: E0123 09:26:14.178882 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" 
podUID="b3f2f6c1-234f-457b-b335-f7e732976b73" Jan 23 09:26:18 crc kubenswrapper[4684]: E0123 09:26:18.035570 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 23 09:26:18 crc kubenswrapper[4684]: E0123 09:26:18.036503 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjtkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-7nv72_openstack-operators(9e4ad169-96f1-40ef-bedf-75d3a233ca35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:18 crc kubenswrapper[4684]: E0123 09:26:18.037879 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" podUID="9e4ad169-96f1-40ef-bedf-75d3a233ca35" Jan 23 09:26:18 crc kubenswrapper[4684]: E0123 09:26:18.202998 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" podUID="9e4ad169-96f1-40ef-bedf-75d3a233ca35" Jan 23 09:26:19 crc kubenswrapper[4684]: E0123 09:26:19.012651 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 23 09:26:19 crc kubenswrapper[4684]: E0123 09:26:19.012878 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2mbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-ll27v_openstack-operators(0755ab86-427c-4e7b-8712-4db92f543c69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:19 crc kubenswrapper[4684]: E0123 09:26:19.014644 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" 
podUID="0755ab86-427c-4e7b-8712-4db92f543c69" Jan 23 09:26:19 crc kubenswrapper[4684]: E0123 09:26:19.208537 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" podUID="0755ab86-427c-4e7b-8712-4db92f543c69" Jan 23 09:26:20 crc kubenswrapper[4684]: E0123 09:26:20.691217 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 23 09:26:20 crc kubenswrapper[4684]: E0123 09:26:20.691686 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6j86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-dbggg_openstack-operators(ba45281f-6224-4ce8-bc8e-df42f7e89340): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:20 crc kubenswrapper[4684]: E0123 09:26:20.692825 4684 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" podUID="ba45281f-6224-4ce8-bc8e-df42f7e89340" Jan 23 09:26:21 crc kubenswrapper[4684]: E0123 09:26:21.222658 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" podUID="ba45281f-6224-4ce8-bc8e-df42f7e89340" Jan 23 09:26:23 crc kubenswrapper[4684]: E0123 09:26:23.561319 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 23 09:26:23 crc kubenswrapper[4684]: E0123 09:26:23.561845 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ngx6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-skhwl_openstack-operators(e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:23 crc kubenswrapper[4684]: E0123 09:26:23.564532 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" podUID="e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7" Jan 23 09:26:24 crc kubenswrapper[4684]: E0123 09:26:24.238194 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" podUID="e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7" Jan 23 09:26:26 crc kubenswrapper[4684]: I0123 09:26:26.715361 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:26 crc kubenswrapper[4684]: I0123 09:26:26.721394 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ef474359-484b-4042-8d86-0aa2fce7a260-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-s5vdl\" (UID: \"ef474359-484b-4042-8d86-0aa2fce7a260\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:26 crc kubenswrapper[4684]: I0123 09:26:26.755052 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f5kkf" Jan 23 09:26:26 crc kubenswrapper[4684]: I0123 09:26:26.762471 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.096415 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.097881 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxr7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-srv5g_openstack-operators(fd2ff302-08d1-4fd7-a45c-152155876b56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.099158 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" podUID="fd2ff302-08d1-4fd7-a45c-152155876b56" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.308359 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" podUID="fd2ff302-08d1-4fd7-a45c-152155876b56" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.636677 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.637142 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c9b8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-ht6sr_openstack-operators(294e6daa-1ac9-4afc-b489-f7cff06c18ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:33 crc kubenswrapper[4684]: E0123 09:26:33.638468 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" podUID="294e6daa-1ac9-4afc-b489-f7cff06c18ec" Jan 23 09:26:34 crc kubenswrapper[4684]: E0123 09:26:34.108326 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 09:26:34 crc kubenswrapper[4684]: E0123 09:26:34.108565 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdw2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-4rk7k_openstack-operators(829a9115-60b9-4f34-811a-1acc4cbd9897): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:34 crc kubenswrapper[4684]: E0123 09:26:34.110247 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" podUID="829a9115-60b9-4f34-811a-1acc4cbd9897" Jan 23 09:26:34 crc kubenswrapper[4684]: E0123 09:26:34.315918 4684 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" podUID="294e6daa-1ac9-4afc-b489-f7cff06c18ec" Jan 23 09:26:34 crc kubenswrapper[4684]: E0123 09:26:34.316477 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" podUID="829a9115-60b9-4f34-811a-1acc4cbd9897" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.027338 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.027806 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmrk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
octavia-operator-controller-manager-7bd9774b6-b82vt_openstack-operators(2466d64b-62c9-422f-9609-5aaaa7de084c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.029260 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" podUID="2466d64b-62c9-422f-9609-5aaaa7de084c" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.345675 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" podUID="2466d64b-62c9-422f-9609-5aaaa7de084c" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.579855 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.580081 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5gqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-p77dl_openstack-operators(31af0894-c5ac-41ef-842e-b7d01dfa2229): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:36 crc kubenswrapper[4684]: E0123 09:26:36.581675 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" podUID="31af0894-c5ac-41ef-842e-b7d01dfa2229" Jan 23 09:26:37 crc kubenswrapper[4684]: E0123 09:26:37.350403 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" podUID="31af0894-c5ac-41ef-842e-b7d01dfa2229" Jan 23 09:26:40 crc kubenswrapper[4684]: E0123 09:26:40.836887 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b" Jan 23 09:26:40 crc kubenswrapper[4684]: E0123 09:26:40.837362 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d7rq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-sx2td_openstack-operators(afb73601-eb5b-44cd-9f30-4e38a4cc28be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:40 crc kubenswrapper[4684]: E0123 09:26:40.838504 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" podUID="afb73601-eb5b-44cd-9f30-4e38a4cc28be" Jan 23 09:26:41 crc kubenswrapper[4684]: E0123 09:26:41.376757 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" podUID="afb73601-eb5b-44cd-9f30-4e38a4cc28be" Jan 23 09:26:41 crc kubenswrapper[4684]: E0123 09:26:41.416562 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30" Jan 23 09:26:41 crc kubenswrapper[4684]: E0123 09:26:41.417157 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fb8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-6s79c_openstack-operators(5bb19409-93c9-4453-800c-ce2899b48427): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:41 crc kubenswrapper[4684]: E0123 09:26:41.418443 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" podUID="5bb19409-93c9-4453-800c-ce2899b48427" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.122832 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.123646 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qtc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-jnlvz_openstack-operators(b1376fdd-31b4-4a7a-a9b6-1a38565083cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.124949 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" podUID="b1376fdd-31b4-4a7a-a9b6-1a38565083cb" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.387224 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" podUID="b1376fdd-31b4-4a7a-a9b6-1a38565083cb" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.655514 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.655716 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbgr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-lfjfh_openstack-operators(67b55215-9df7-4273-8e15-27c0a969e065): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:42 crc kubenswrapper[4684]: E0123 09:26:42.658085 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" podUID="67b55215-9df7-4273-8e15-27c0a969e065" Jan 23 09:26:43 crc kubenswrapper[4684]: E0123 09:26:43.394421 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" podUID="67b55215-9df7-4273-8e15-27c0a969e065" Jan 23 09:26:44 crc kubenswrapper[4684]: I0123 09:26:44.144997 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq"] Jan 23 09:26:46 crc kubenswrapper[4684]: E0123 09:26:46.118211 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 09:26:46 crc kubenswrapper[4684]: E0123 09:26:46.118924 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c6nkk_openstack-operators(b45428ef-0f84-4d58-ab99-9d7e26470caa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:26:46 crc kubenswrapper[4684]: E0123 09:26:46.120237 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" podUID="b45428ef-0f84-4d58-ab99-9d7e26470caa" Jan 23 09:26:46 crc kubenswrapper[4684]: I0123 09:26:46.415766 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" event={"ID":"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4","Type":"ContainerStarted","Data":"2a54e52c78df4dc9d024f9433faf85e2b8febdc92acd50886dc8d662ae782387"} Jan 23 09:26:46 crc kubenswrapper[4684]: I0123 09:26:46.420509 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" event={"ID":"dc5b7444-cf61-439c-a7ed-3c97289e6cfe","Type":"ContainerStarted","Data":"7dcb95ce8e32844c193a5df4aa229eda8cf94f8689ec954eed050d276edc8ca4"} Jan 23 09:26:46 crc kubenswrapper[4684]: I0123 09:26:46.420779 4684 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:26:46 crc kubenswrapper[4684]: I0123 09:26:46.436788 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" podStartSLOduration=6.331426387 podStartE2EDuration="53.436773021s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:54.995796771 +0000 UTC m=+1127.619175312" lastFinishedPulling="2026-01-23 09:26:42.101143405 +0000 UTC m=+1174.724521946" observedRunningTime="2026-01-23 09:26:46.43530453 +0000 UTC m=+1179.058683071" watchObservedRunningTime="2026-01-23 09:26:46.436773021 +0000 UTC m=+1179.060151562" Jan 23 09:26:46 crc kubenswrapper[4684]: I0123 09:26:46.571590 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8"] Jan 23 09:26:46 crc kubenswrapper[4684]: I0123 09:26:46.754467 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl"] Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.457272 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" event={"ID":"299d3d78-4346-43f2-86f2-e1a3c20513a5","Type":"ContainerStarted","Data":"ff8a51262b61c79fcd6ccedcf0dcf4821dd4a51f311b752b1b72c152384ce5da"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.458642 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.466254 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" event={"ID":"ba45281f-6224-4ce8-bc8e-df42f7e89340","Type":"ContainerStarted","Data":"77c62d96aec2373e3763565b22f52e3c09a796baef5586859d54a7fbc1ad18db"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.466567 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.479089 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" event={"ID":"b3f2f6c1-234f-457b-b335-f7e732976b73","Type":"ContainerStarted","Data":"30e3283c47a4efe3a38747bb89241047e247fd89ec692084efe6a7a7b9a2c30a"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.479380 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.488912 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" event={"ID":"ca0f93c0-4138-44c8-bd7d-027ced364a97","Type":"ContainerStarted","Data":"39c1d084fbbd77b84c52b5189b66b28ecbf8ce815a8afb72e60c70ce65903e46"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.489603 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.491863 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" event={"ID":"d61b277c-9b8c-423e-9b63-66dd812147c3","Type":"ContainerStarted","Data":"de8e5931e050571b43be802232a665c395db9023165e98a04658b9cc76376b26"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.492430 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.493761 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" event={"ID":"ef474359-484b-4042-8d86-0aa2fce7a260","Type":"ContainerStarted","Data":"045f26e49d5332c8747461e67093a5d669dceafa6f6c7c5e7cb462f377207f31"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.507985 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" event={"ID":"e1b45f19-8737-4f21-aade-d2b9cfda08fe","Type":"ContainerStarted","Data":"b9dd67e994a3d0d15f1eb55fb0c000fb98735379234bfb3ab123545f72d60fe1"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.508796 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.518383 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" event={"ID":"9e4ad169-96f1-40ef-bedf-75d3a233ca35","Type":"ContainerStarted","Data":"6c35929ff4f0f344f3a83657670e03703252b986b407be5425be1edf382ae3c4"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.518935 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.531473 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" event={"ID":"56e669a2-5990-45ad-8d32-e8d57ef7a81e","Type":"ContainerStarted","Data":"09f86609dd72b3721de872d6a65f592348aaa8b1a674d49d7c5c01be63e11167"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.546764 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" podStartSLOduration=9.014143971 podStartE2EDuration="54.54674587s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.566866599 +0000 UTC m=+1129.190245140" lastFinishedPulling="2026-01-23 09:26:42.099468498 +0000 UTC m=+1174.722847039" observedRunningTime="2026-01-23 09:26:47.541082392 +0000 UTC m=+1180.164460933" watchObservedRunningTime="2026-01-23 09:26:47.54674587 +0000 UTC m=+1180.170124431" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.548986 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" event={"ID":"e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7","Type":"ContainerStarted","Data":"7a6c8164df9ca6d3fe8a564b079446012070091602052b0a692f771e19315455"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.549608 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.550316 4684 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" podStartSLOduration=8.032232152 podStartE2EDuration="54.55030239s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:55.582794749 +0000 UTC m=+1128.206173290" lastFinishedPulling="2026-01-23 09:26:42.100864987 +0000 UTC m=+1174.724243528" observedRunningTime="2026-01-23 09:26:47.499128785 +0000 UTC m=+1180.122507326" watchObservedRunningTime="2026-01-23 09:26:47.55030239 +0000 UTC m=+1180.173680931" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.563452 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" event={"ID":"0755ab86-427c-4e7b-8712-4db92f543c69","Type":"ContainerStarted","Data":"2194be1c177ca7a7588dbdc9e9a9d9be32aafbc0bf460bb303d7b9d2beed39e9"} Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.563869 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.601884 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" podStartSLOduration=5.111135633 podStartE2EDuration="54.601868626s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.604550494 +0000 UTC m=+1129.227929035" lastFinishedPulling="2026-01-23 09:26:46.095283487 +0000 UTC m=+1178.718662028" observedRunningTime="2026-01-23 09:26:47.601176866 +0000 UTC m=+1180.224555417" watchObservedRunningTime="2026-01-23 09:26:47.601868626 +0000 UTC m=+1180.225247167" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.675723 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" podStartSLOduration=4.214558532 podStartE2EDuration="53.675683805s" podCreationTimestamp="2026-01-23 09:25:54 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.616236091 +0000 UTC m=+1129.239614632" lastFinishedPulling="2026-01-23 09:26:46.077361364 +0000 UTC m=+1178.700739905" observedRunningTime="2026-01-23 09:26:47.673863974 +0000 UTC m=+1180.297242535" watchObservedRunningTime="2026-01-23 09:26:47.675683805 +0000 UTC m=+1180.299062346" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.698917 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" podStartSLOduration=5.237749512 podStartE2EDuration="54.698900336s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.616657113 +0000 UTC m=+1129.240035654" lastFinishedPulling="2026-01-23 09:26:46.077807937 +0000 UTC m=+1178.701186478" observedRunningTime="2026-01-23 09:26:47.696835398 +0000 UTC m=+1180.320213939" watchObservedRunningTime="2026-01-23 09:26:47.698900336 +0000 UTC m=+1180.322278877" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.781264 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" podStartSLOduration=4.713796995 podStartE2EDuration="54.781224934s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.009874514 +0000 UTC m=+1128.633253055" 
lastFinishedPulling="2026-01-23 09:26:46.077302453 +0000 UTC m=+1178.700680994" observedRunningTime="2026-01-23 09:26:47.743925159 +0000 UTC m=+1180.367303700" watchObservedRunningTime="2026-01-23 09:26:47.781224934 +0000 UTC m=+1180.404603475" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.832764 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" podStartSLOduration=4.851098323 podStartE2EDuration="54.832741379s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.112366925 +0000 UTC m=+1128.735745456" lastFinishedPulling="2026-01-23 09:26:46.094009971 +0000 UTC m=+1178.717388512" observedRunningTime="2026-01-23 09:26:47.800480524 +0000 UTC m=+1180.423859065" watchObservedRunningTime="2026-01-23 09:26:47.832741379 +0000 UTC m=+1180.456119930" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.837350 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" podStartSLOduration=5.311758198 podStartE2EDuration="54.837328977s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.552209537 +0000 UTC m=+1129.175588078" lastFinishedPulling="2026-01-23 09:26:46.077780316 +0000 UTC m=+1178.701158857" observedRunningTime="2026-01-23 09:26:47.83207171 +0000 UTC m=+1180.455450251" watchObservedRunningTime="2026-01-23 09:26:47.837328977 +0000 UTC m=+1180.460707518" Jan 23 09:26:47 crc kubenswrapper[4684]: I0123 09:26:47.930895 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" podStartSLOduration=4.975745594 podStartE2EDuration="54.93087957s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.1222647 +0000 UTC m=+1128.745643241" lastFinishedPulling="2026-01-23 09:26:46.077398676 +0000 UTC m=+1178.700777217" observedRunningTime="2026-01-23 09:26:47.928784512 +0000 UTC m=+1180.552163053" watchObservedRunningTime="2026-01-23 09:26:47.93087957 +0000 UTC m=+1180.554258111" Jan 23 09:26:48 crc kubenswrapper[4684]: I0123 09:26:48.569566 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" event={"ID":"ef474359-484b-4042-8d86-0aa2fce7a260","Type":"ContainerStarted","Data":"05c76faab7de6c98ab45fcd259134e4096762c08022ccd59ade89cfa0039c8ff"} Jan 23 09:26:49 crc kubenswrapper[4684]: I0123 09:26:49.576915 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:49 crc kubenswrapper[4684]: I0123 09:26:49.614075 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" podStartSLOduration=55.614056341 podStartE2EDuration="55.614056341s" podCreationTimestamp="2026-01-23 09:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:49.602254681 +0000 UTC m=+1182.225633222" watchObservedRunningTime="2026-01-23 09:26:49.614056341 +0000 UTC m=+1182.237434892" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.596951 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" event={"ID":"b0bb140c-ce3d-4d8b-8627-67ae0145b2d4","Type":"ContainerStarted","Data":"e7f6c9ee81d68a90369abf4803b89a500d69055dd9d427a884c3118f1c1e0593"} Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.597673 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.599412 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" event={"ID":"294e6daa-1ac9-4afc-b489-f7cff06c18ec","Type":"ContainerStarted","Data":"35a883723c664d24250606367b6f8fcc57a8db7dd746de89832272eb425437fe"} Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.600091 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.601179 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" event={"ID":"56e669a2-5990-45ad-8d32-e8d57ef7a81e","Type":"ContainerStarted","Data":"aa80238ffac7f6840c64e49a9c197faba10226fd68f0a0519d802b6b9b5fc462"} Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.601564 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.602663 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" event={"ID":"fd2ff302-08d1-4fd7-a45c-152155876b56","Type":"ContainerStarted","Data":"23c16a4238ca5d37dbac618dbe939a92780e099334d20f1b2c152fb22939de99"} Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.603121 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.607192 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" event={"ID":"829a9115-60b9-4f34-811a-1acc4cbd9897","Type":"ContainerStarted","Data":"2601cadf349c9f13f969ad85fade290c031196c4871707dce59827d04eb94d5d"} Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.607542 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.608494 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" event={"ID":"2466d64b-62c9-422f-9609-5aaaa7de084c","Type":"ContainerStarted","Data":"ab890cd6310d4a27550ae8f751d20e280d67252713424bcd55c902c19211e2a0"} Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.608848 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.641143 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" podStartSLOduration=3.753349943 podStartE2EDuration="59.64112652s" 
podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:55.376914602 +0000 UTC m=+1128.000293143" lastFinishedPulling="2026-01-23 09:26:51.264691179 +0000 UTC m=+1183.888069720" observedRunningTime="2026-01-23 09:26:52.63400239 +0000 UTC m=+1185.257380931" watchObservedRunningTime="2026-01-23 09:26:52.64112652 +0000 UTC m=+1185.264505061" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.656722 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" podStartSLOduration=54.933010291 podStartE2EDuration="59.656690506s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:26:46.592351283 +0000 UTC m=+1179.215729824" lastFinishedPulling="2026-01-23 09:26:51.316031498 +0000 UTC m=+1183.939410039" observedRunningTime="2026-01-23 09:26:52.653360923 +0000 UTC m=+1185.276739474" watchObservedRunningTime="2026-01-23 09:26:52.656690506 +0000 UTC m=+1185.280069047" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.675667 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" podStartSLOduration=4.046510832 podStartE2EDuration="58.675653288s" podCreationTimestamp="2026-01-23 09:25:54 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.685321289 +0000 UTC m=+1129.308699820" lastFinishedPulling="2026-01-23 09:26:51.314463735 +0000 UTC m=+1183.937842276" observedRunningTime="2026-01-23 09:26:52.672939402 +0000 UTC m=+1185.296317943" watchObservedRunningTime="2026-01-23 09:26:52.675653288 +0000 UTC m=+1185.299031829" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.688148 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" podStartSLOduration=4.4991091260000005 podStartE2EDuration="59.688132158s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.12747676 +0000 UTC m=+1128.750855301" lastFinishedPulling="2026-01-23 09:26:51.316499792 +0000 UTC m=+1183.939878333" observedRunningTime="2026-01-23 09:26:52.687115899 +0000 UTC m=+1185.310494440" watchObservedRunningTime="2026-01-23 09:26:52.688132158 +0000 UTC m=+1185.311510689" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.708606 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" podStartSLOduration=4.992956749 podStartE2EDuration="59.708580971s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.602844435 +0000 UTC m=+1129.226222976" lastFinishedPulling="2026-01-23 09:26:51.318468657 +0000 UTC m=+1183.941847198" observedRunningTime="2026-01-23 09:26:52.701692238 +0000 UTC m=+1185.325070779" watchObservedRunningTime="2026-01-23 09:26:52.708580971 +0000 UTC m=+1185.331959512" Jan 23 09:26:52 crc kubenswrapper[4684]: I0123 09:26:52.753453 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" podStartSLOduration=54.542223325 podStartE2EDuration="59.753436279s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:26:46.104829825 +0000 UTC m=+1178.728208366" lastFinishedPulling="2026-01-23 09:26:51.316042779 +0000 UTC m=+1183.939421320" 
observedRunningTime="2026-01-23 09:26:52.746056902 +0000 UTC m=+1185.369435453" watchObservedRunningTime="2026-01-23 09:26:52.753436279 +0000 UTC m=+1185.376814820" Jan 23 09:26:53 crc kubenswrapper[4684]: E0123 09:26:53.583251 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" podUID="5bb19409-93c9-4453-800c-ce2899b48427" Jan 23 09:26:53 crc kubenswrapper[4684]: I0123 09:26:53.795472 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-sbkxr" Jan 23 09:26:53 crc kubenswrapper[4684]: I0123 09:26:53.907204 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hx5dq" Jan 23 09:26:53 crc kubenswrapper[4684]: I0123 09:26:53.992107 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-gc4d6" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.119820 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-skhwl" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.142820 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-pl7fj" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.194224 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7nv72" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.582838 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ll27v" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.623476 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dbggg" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.696409 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-8cnrp" Jan 23 09:26:54 crc kubenswrapper[4684]: I0123 09:26:54.825002 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2f7kg" Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.650200 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" event={"ID":"31af0894-c5ac-41ef-842e-b7d01dfa2229","Type":"ContainerStarted","Data":"ea645e243a2a3ef9338cd560586d87086218ff961ddb2b7446fe20d39499aa77"} Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.650748 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.655948 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" 
event={"ID":"afb73601-eb5b-44cd-9f30-4e38a4cc28be","Type":"ContainerStarted","Data":"5aad4842d28b4714c2c6ad4ecd2f6961fd98157a44596fb07d2b78b1f5bc068f"} Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.656177 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.660109 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" event={"ID":"b1376fdd-31b4-4a7a-a9b6-1a38565083cb","Type":"ContainerStarted","Data":"4f016a969d007177d179d9cbef055cdeb3d973f90572a24f75675460dbf8e842"} Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.660337 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.686960 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" podStartSLOduration=3.430934252 podStartE2EDuration="1m1.686941125s" podCreationTimestamp="2026-01-23 09:25:54 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.895006296 +0000 UTC m=+1129.518384837" lastFinishedPulling="2026-01-23 09:26:55.151013169 +0000 UTC m=+1187.774391710" observedRunningTime="2026-01-23 09:26:55.686506412 +0000 UTC m=+1188.309884953" watchObservedRunningTime="2026-01-23 09:26:55.686941125 +0000 UTC m=+1188.310319666" Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.687600 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" podStartSLOduration=2.745161101 podStartE2EDuration="1m2.687591183s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:55.208866635 +0000 UTC m=+1127.832245176" lastFinishedPulling="2026-01-23 09:26:55.151296717 +0000 UTC m=+1187.774675258" observedRunningTime="2026-01-23 09:26:55.673350303 +0000 UTC m=+1188.296728854" watchObservedRunningTime="2026-01-23 09:26:55.687591183 +0000 UTC m=+1188.310969724" Jan 23 09:26:55 crc kubenswrapper[4684]: I0123 09:26:55.701070 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" podStartSLOduration=4.152088936 podStartE2EDuration="1m2.70104864s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.604818152 +0000 UTC m=+1129.228196693" lastFinishedPulling="2026-01-23 09:26:55.153777856 +0000 UTC m=+1187.777156397" observedRunningTime="2026-01-23 09:26:55.700737031 +0000 UTC m=+1188.324115582" watchObservedRunningTime="2026-01-23 09:26:55.70104864 +0000 UTC m=+1188.324427191" Jan 23 09:26:56 crc kubenswrapper[4684]: E0123 09:26:56.582923 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" podUID="b45428ef-0f84-4d58-ab99-9d7e26470caa" Jan 23 09:26:56 crc kubenswrapper[4684]: I0123 09:26:56.667319 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" event={"ID":"67b55215-9df7-4273-8e15-27c0a969e065","Type":"ContainerStarted","Data":"6b79d6586a04dd12915986e79d242a563f5db46a55206aaaa7a48c087fa8f141"} Jan 23 09:26:56 crc kubenswrapper[4684]: I0123 09:26:56.681713 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" podStartSLOduration=3.66233638 podStartE2EDuration="1m3.681676074s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.026375489 +0000 UTC m=+1128.649754030" lastFinishedPulling="2026-01-23 09:26:56.045715183 +0000 UTC m=+1188.669093724" observedRunningTime="2026-01-23 09:26:56.679680798 +0000 UTC m=+1189.303059339" watchObservedRunningTime="2026-01-23 09:26:56.681676074 +0000 UTC m=+1189.305054615" Jan 23 09:26:56 crc kubenswrapper[4684]: I0123 09:26:56.768114 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-s5vdl" Jan 23 09:26:59 crc kubenswrapper[4684]: I0123 09:26:59.614908 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-t4lh8" Jan 23 09:27:00 crc kubenswrapper[4684]: I0123 09:27:00.167221 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq" Jan 23 09:27:03 crc kubenswrapper[4684]: I0123 09:27:03.785073 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-srv5g" Jan 23 09:27:03 crc kubenswrapper[4684]: I0123 09:27:03.815209 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p77dl" Jan 23 09:27:03 crc kubenswrapper[4684]: I0123 09:27:03.935814 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ht6sr" Jan 23 09:27:04 crc kubenswrapper[4684]: I0123 09:27:04.384782 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:27:04 crc kubenswrapper[4684]: I0123 09:27:04.388399 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lfjfh" Jan 23 09:27:04 crc kubenswrapper[4684]: I0123 09:27:04.512451 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-jnlvz" Jan 23 09:27:04 crc kubenswrapper[4684]: I0123 09:27:04.536962 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-b82vt" Jan 23 09:27:04 crc kubenswrapper[4684]: I0123 09:27:04.868485 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4rk7k" Jan 23 09:27:04 crc kubenswrapper[4684]: I0123 09:27:04.938404 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-sx2td" Jan 23 09:27:05 crc kubenswrapper[4684]: I0123 09:27:05.740730 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" event={"ID":"5bb19409-93c9-4453-800c-ce2899b48427","Type":"ContainerStarted","Data":"c6d112ee38e45e158c95b16b6c7619350561806b6a7b6358814cb4a5ca3c97a1"} Jan 23 09:27:05 crc kubenswrapper[4684]: I0123 09:27:05.741874 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:27:05 crc kubenswrapper[4684]: I0123 09:27:05.762638 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" podStartSLOduration=4.413257337 podStartE2EDuration="1m12.762614651s" podCreationTimestamp="2026-01-23 09:25:53 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.685375171 +0000 UTC m=+1129.308753712" lastFinishedPulling="2026-01-23 09:27:05.034732485 +0000 UTC m=+1197.658111026" observedRunningTime="2026-01-23 09:27:05.762126988 +0000 UTC m=+1198.385505529" watchObservedRunningTime="2026-01-23 09:27:05.762614651 +0000 UTC m=+1198.385993192" Jan 23 09:27:12 crc kubenswrapper[4684]: I0123 09:27:12.793388 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" event={"ID":"b45428ef-0f84-4d58-ab99-9d7e26470caa","Type":"ContainerStarted","Data":"264f697572a69e05b09b693fcae88610f620503c647672ccca631a9da4075f10"} Jan 23 09:27:12 crc kubenswrapper[4684]: I0123 09:27:12.808359 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c6nkk" podStartSLOduration=3.664704176 podStartE2EDuration="1m18.808338501s" podCreationTimestamp="2026-01-23 09:25:54 +0000 UTC" firstStartedPulling="2026-01-23 09:25:56.895347196 +0000 UTC m=+1129.518725737" lastFinishedPulling="2026-01-23 09:27:12.038981511 +0000 UTC m=+1204.662360062" observedRunningTime="2026-01-23 09:27:12.806111118 +0000 UTC m=+1205.429489659" watchObservedRunningTime="2026-01-23 09:27:12.808338501 +0000 UTC m=+1205.431717042" Jan 23 09:27:14 crc kubenswrapper[4684]: I0123 09:27:14.348041 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-6s79c" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.787470 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vxhm7"] Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.789009 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.796301 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.796476 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.796710 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-s6lz7" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.796908 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.802871 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vxhm7"] Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.901926 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngv8\" (UniqueName: \"kubernetes.io/projected/42066789-739c-4d7a-9072-0b67742f5ceb-kube-api-access-fngv8\") pod \"dnsmasq-dns-84bb9d8bd9-vxhm7\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.902007 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42066789-739c-4d7a-9072-0b67742f5ceb-config\") pod \"dnsmasq-dns-84bb9d8bd9-vxhm7\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.973726 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wdfk4"] Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.975109 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.980066 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 09:27:29 crc kubenswrapper[4684]: I0123 09:27:29.987213 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wdfk4"] Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.003261 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-dns-svc\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.003340 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42066789-739c-4d7a-9072-0b67742f5ceb-config\") pod \"dnsmasq-dns-84bb9d8bd9-vxhm7\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.003379 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-config\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.003417 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wq8\" (UniqueName: \"kubernetes.io/projected/236ba707-81cb-4106-95ed-09134443809a-kube-api-access-h5wq8\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.003545 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fngv8\" (UniqueName: \"kubernetes.io/projected/42066789-739c-4d7a-9072-0b67742f5ceb-kube-api-access-fngv8\") pod \"dnsmasq-dns-84bb9d8bd9-vxhm7\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.005111 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42066789-739c-4d7a-9072-0b67742f5ceb-config\") pod \"dnsmasq-dns-84bb9d8bd9-vxhm7\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.065978 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngv8\" (UniqueName: \"kubernetes.io/projected/42066789-739c-4d7a-9072-0b67742f5ceb-kube-api-access-fngv8\") pod \"dnsmasq-dns-84bb9d8bd9-vxhm7\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.104999 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-dns-svc\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 
09:27:30.105094 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-config\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.105161 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wq8\" (UniqueName: \"kubernetes.io/projected/236ba707-81cb-4106-95ed-09134443809a-kube-api-access-h5wq8\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.106148 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-config\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.106159 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-dns-svc\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.109609 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.154649 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wq8\" (UniqueName: \"kubernetes.io/projected/236ba707-81cb-4106-95ed-09134443809a-kube-api-access-h5wq8\") pod \"dnsmasq-dns-5f854695bc-wdfk4\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.323056 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.731224 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vxhm7"] Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.882811 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wdfk4"] Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.926856 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" event={"ID":"236ba707-81cb-4106-95ed-09134443809a","Type":"ContainerStarted","Data":"a1ffc27ad82b329d852479d6f3ccb087eb312023934fa7e1ca22cd00217c32b0"} Jan 23 09:27:30 crc kubenswrapper[4684]: I0123 09:27:30.928224 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" event={"ID":"42066789-739c-4d7a-9072-0b67742f5ceb","Type":"ContainerStarted","Data":"0cd60bad5a4a667976206029f833989bf1c7bde06a314f5423f5fba2937fcc46"} Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.769643 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wdfk4"] Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.803765 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-rq86k"] Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.805387 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.848571 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-rq86k"] Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.998318 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98m69\" (UniqueName: \"kubernetes.io/projected/37b79503-0495-4e7c-8bd4-c50fe67c35c5-kube-api-access-98m69\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.998486 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-config\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:32 crc kubenswrapper[4684]: I0123 09:27:32.998914 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.100005 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-config\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.100077 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-dns-svc\") pod 
\"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.100170 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98m69\" (UniqueName: \"kubernetes.io/projected/37b79503-0495-4e7c-8bd4-c50fe67c35c5-kube-api-access-98m69\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.101161 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.102150 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-config\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.161654 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98m69\" (UniqueName: \"kubernetes.io/projected/37b79503-0495-4e7c-8bd4-c50fe67c35c5-kube-api-access-98m69\") pod \"dnsmasq-dns-744ffd65bc-rq86k\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") " pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.167946 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vxhm7"] Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.216592 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9fjr7"] Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.220554 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.292609 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9fjr7"] Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.404156 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-config\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.404214 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9z4m\" (UniqueName: \"kubernetes.io/projected/169ee556-d1ee-4f51-9958-46bd24d4467f-kube-api-access-m9z4m\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.404374 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-dns-svc\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.439822 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.505827 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9z4m\" (UniqueName: \"kubernetes.io/projected/169ee556-d1ee-4f51-9958-46bd24d4467f-kube-api-access-m9z4m\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.506022 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-dns-svc\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.506119 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-config\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.507490 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-config\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.507652 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-dns-svc\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.527342 
4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9z4m\" (UniqueName: \"kubernetes.io/projected/169ee556-d1ee-4f51-9958-46bd24d4467f-kube-api-access-m9z4m\") pod \"dnsmasq-dns-95f5f6995-9fjr7\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.595618 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:27:33 crc kubenswrapper[4684]: I0123 09:27:33.784422 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-rq86k"] Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.011378 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" event={"ID":"37b79503-0495-4e7c-8bd4-c50fe67c35c5","Type":"ContainerStarted","Data":"a6a23007b968f1f8301cc4108f2a37faf4ecd3323b9e03fe7975e82a3986aea3"} Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.213876 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.222778 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.227557 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.227815 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wr9hs" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.228013 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.228154 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.228343 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.228478 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.231115 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.233515 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.283849 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9fjr7"] Jan 23 09:27:34 crc kubenswrapper[4684]: W0123 09:27:34.295161 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169ee556_d1ee_4f51_9958_46bd24d4467f.slice/crio-0cda0dc6524d004735f24df66e7a23e60dd8772f536d7fff61af71f17d84535e WatchSource:0}: Error finding container 0cda0dc6524d004735f24df66e7a23e60dd8772f536d7fff61af71f17d84535e: Status 404 returned error can't find the container with id 0cda0dc6524d004735f24df66e7a23e60dd8772f536d7fff61af71f17d84535e Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.362459 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.372746 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.379367 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.380542 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.381511 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qjlcz"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.381557 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.383602 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.384005 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.397150 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425206 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425263 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-server-conf\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425316 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-config-data\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425358 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425388 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82a71d38-3c68-43a9-9913-bc184ebed996-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425425 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425453 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmdh\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-kube-api-access-wxmdh\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425496 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425539 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425561 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.425584 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82a71d38-3c68-43a9-9913-bc184ebed996-pod-info\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527410 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527458 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-config-data\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527487 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527509 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527532 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82a71d38-3c68-43a9-9913-bc184ebed996-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527557 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4ks\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-kube-api-access-nm4ks\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527572 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527596 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527610 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmdh\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-kube-api-access-wxmdh\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527626 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0c15bc-8e5e-47ee-9c23-1673363f1603-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527667 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527684 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527723 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0c15bc-8e5e-47ee-9c23-1673363f1603-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.527746 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528379 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528449 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528494 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82a71d38-3c68-43a9-9913-bc184ebed996-pod-info\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528530 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528577 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528609 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528655 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-server-conf\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.528917 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-config-data\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.529244 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.530120 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.530752 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.535240 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.536119 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-server-conf\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.550885 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82a71d38-3c68-43a9-9913-bc184ebed996-pod-info\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.557793 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82a71d38-3c68-43a9-9913-bc184ebed996-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.576956 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.582641 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.595950 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.598061 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmdh\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-kube-api-access-wxmdh\") pod \"rabbitmq-server-0\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630476 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630610 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630665 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm4ks\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-kube-api-access-nm4ks\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630717 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630742 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0c15bc-8e5e-47ee-9c23-1673363f1603-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630768 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630797 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630832 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0c15bc-8e5e-47ee-9c23-1673363f1603-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630862 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.630896 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.634436 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.635438 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.635692 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.636089 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.637162 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.639682 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.640048 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0c15bc-8e5e-47ee-9c23-1673363f1603-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.640877 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0c15bc-8e5e-47ee-9c23-1673363f1603-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.642537 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.658483 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm4ks\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-kube-api-access-nm4ks\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.662721 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.663222 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.703244 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:27:34 crc kubenswrapper[4684]: I0123 09:27:34.877586 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.045304 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" event={"ID":"169ee556-d1ee-4f51-9958-46bd24d4467f","Type":"ContainerStarted","Data":"0cda0dc6524d004735f24df66e7a23e60dd8772f536d7fff61af71f17d84535e"}
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.352565 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.390408 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:27:35 crc kubenswrapper[4684]: W0123 09:27:35.446117 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a0c15bc_8e5e_47ee_9c23_1673363f1603.slice/crio-0188f9303d149eef3a32673d40e166bb661ca7d56d33c5e1446afa1acc86659a WatchSource:0}: Error finding container 0188f9303d149eef3a32673d40e166bb661ca7d56d33c5e1446afa1acc86659a: Status 404 returned error can't find the container with id 0188f9303d149eef3a32673d40e166bb661ca7d56d33c5e1446afa1acc86659a
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.657940 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.660494 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.660588 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.663852 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rgkpp"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.664058 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.665013 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.665330 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.682684 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.789652 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sm2l\" (UniqueName: \"kubernetes.io/projected/01c5f17c-8303-4cae-b577-1da34c402098-kube-api-access-6sm2l\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.791115 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5f17c-8303-4cae-b577-1da34c402098-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.791261 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.791394 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c5f17c-8303-4cae-b577-1da34c402098-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.791630 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5f17c-8303-4cae-b577-1da34c402098-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.791765 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-config-data-default\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.791822 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-kolla-config\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.792061 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894009 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894089 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sm2l\" (UniqueName: \"kubernetes.io/projected/01c5f17c-8303-4cae-b577-1da34c402098-kube-api-access-6sm2l\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894115 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5f17c-8303-4cae-b577-1da34c402098-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894136 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894158 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c5f17c-8303-4cae-b577-1da34c402098-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894204 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5f17c-8303-4cae-b577-1da34c402098-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894249 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-config-data-default\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.894280 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-kolla-config\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.895390 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-kolla-config\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.895684 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.899162 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.901339 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c5f17c-8303-4cae-b577-1da34c402098-config-data-default\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.901719 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c5f17c-8303-4cae-b577-1da34c402098-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.932605 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sm2l\" (UniqueName: \"kubernetes.io/projected/01c5f17c-8303-4cae-b577-1da34c402098-kube-api-access-6sm2l\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.933678 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5f17c-8303-4cae-b577-1da34c402098-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.937218 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5f17c-8303-4cae-b577-1da34c402098-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.970529 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"01c5f17c-8303-4cae-b577-1da34c402098\") " pod="openstack/openstack-galera-0"
Jan 23 09:27:35 crc kubenswrapper[4684]: I0123 09:27:35.994851 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.080311 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82a71d38-3c68-43a9-9913-bc184ebed996","Type":"ContainerStarted","Data":"1b21bd9e3930037e74d69751cd0149f3b8e0b508ed3e480c2bf99cb0a21657f7"}
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.093911 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a0c15bc-8e5e-47ee-9c23-1673363f1603","Type":"ContainerStarted","Data":"0188f9303d149eef3a32673d40e166bb661ca7d56d33c5e1446afa1acc86659a"}
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.833241 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 09:27:36 crc kubenswrapper[4684]: W0123 09:27:36.858760 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c5f17c_8303_4cae_b577_1da34c402098.slice/crio-5953c5e609227f1e5eeedf2ec3a7a1594b04c6b7305adb5340e69c6a1628dfb9 WatchSource:0}: Error finding container 5953c5e609227f1e5eeedf2ec3a7a1594b04c6b7305adb5340e69c6a1628dfb9: Status 404 returned error can't find the container with id 5953c5e609227f1e5eeedf2ec3a7a1594b04c6b7305adb5340e69c6a1628dfb9
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.935095 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.938139 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.943218 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.943487 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.943712 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-92t6w"
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.943898 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 23 09:27:36 crc kubenswrapper[4684]: I0123 09:27:36.981418 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032239 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032303 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80a7fc30-a101-4948-9e81-34c2dfb02797-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032352 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5bss\" (UniqueName: \"kubernetes.io/projected/80a7fc30-a101-4948-9e81-34c2dfb02797-kube-api-access-t5bss\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032383 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032405 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032447 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a7fc30-a101-4948-9e81-34c2dfb02797-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032488 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80a7fc30-a101-4948-9e81-34c2dfb02797-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.032536 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133611 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133669 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133745 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a7fc30-a101-4948-9e81-34c2dfb02797-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133789 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80a7fc30-a101-4948-9e81-34c2dfb02797-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133845 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133880 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133931 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80a7fc30-a101-4948-9e81-34c2dfb02797-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.133997 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5bss\" (UniqueName: \"kubernetes.io/projected/80a7fc30-a101-4948-9e81-34c2dfb02797-kube-api-access-t5bss\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.134590 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.140070 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c5f17c-8303-4cae-b577-1da34c402098","Type":"ContainerStarted","Data":"5953c5e609227f1e5eeedf2ec3a7a1594b04c6b7305adb5340e69c6a1628dfb9"}
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.140413 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.140715 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.140886 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80a7fc30-a101-4948-9e81-34c2dfb02797-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.147934 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80a7fc30-a101-4948-9e81-34c2dfb02797-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.163481 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.165323 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.166030 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a7fc30-a101-4948-9e81-34c2dfb02797-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.185204 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.185282 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.185492 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-s4nnx"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.185822 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5bss\" (UniqueName: \"kubernetes.io/projected/80a7fc30-a101-4948-9e81-34c2dfb02797-kube-api-access-t5bss\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.187404 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.187492 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80a7fc30-a101-4948-9e81-34c2dfb02797-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"80a7fc30-a101-4948-9e81-34c2dfb02797\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.222583 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.249212 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7320f601-5b97-49b4-af32-aeae7d297ed1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.249268 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7320f601-5b97-49b4-af32-aeae7d297ed1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.249296 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7320f601-5b97-49b4-af32-aeae7d297ed1-config-data\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.249346 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw7s7\" (UniqueName: \"kubernetes.io/projected/7320f601-5b97-49b4-af32-aeae7d297ed1-kube-api-access-kw7s7\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.249366 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7320f601-5b97-49b4-af32-aeae7d297ed1-kolla-config\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.287186 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.352114 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7320f601-5b97-49b4-af32-aeae7d297ed1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.352174 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7320f601-5b97-49b4-af32-aeae7d297ed1-config-data\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.352252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw7s7\" (UniqueName: \"kubernetes.io/projected/7320f601-5b97-49b4-af32-aeae7d297ed1-kube-api-access-kw7s7\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.352284 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7320f601-5b97-49b4-af32-aeae7d297ed1-kolla-config\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.352344 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7320f601-5b97-49b4-af32-aeae7d297ed1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.353757 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7320f601-5b97-49b4-af32-aeae7d297ed1-config-data\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.354826 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7320f601-5b97-49b4-af32-aeae7d297ed1-kolla-config\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.356437 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7320f601-5b97-49b4-af32-aeae7d297ed1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.381143 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7320f601-5b97-49b4-af32-aeae7d297ed1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.410458 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw7s7\" (UniqueName: \"kubernetes.io/projected/7320f601-5b97-49b4-af32-aeae7d297ed1-kube-api-access-kw7s7\") pod \"memcached-0\" (UID: \"7320f601-5b97-49b4-af32-aeae7d297ed1\") " pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.578390 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 23 09:27:37 crc kubenswrapper[4684]: I0123 09:27:37.961940 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 23 09:27:38 crc kubenswrapper[4684]: I0123 09:27:38.682563 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.195133 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.197212 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.208048 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-j4k4j"
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.243384 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.243429 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7320f601-5b97-49b4-af32-aeae7d297ed1","Type":"ContainerStarted","Data":"9850f24bb2c4f1978b224c301669a5c95ab97a6453728b7447e15e7b97040af9"}
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.282385 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80a7fc30-a101-4948-9e81-34c2dfb02797","Type":"ContainerStarted","Data":"a8fdd7ef86fd0514e19f0be10ba9c198d52ecc676070738d965c1ebc60f7acef"}
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.345013 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq79v\" (UniqueName: \"kubernetes.io/projected/48e55475-0575-41e9-9949-d5bdb86ee565-kube-api-access-tq79v\") pod \"kube-state-metrics-0\" (UID: \"48e55475-0575-41e9-9949-d5bdb86ee565\") " pod="openstack/kube-state-metrics-0"
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.446679 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq79v\" (UniqueName: \"kubernetes.io/projected/48e55475-0575-41e9-9949-d5bdb86ee565-kube-api-access-tq79v\") pod \"kube-state-metrics-0\" (UID: \"48e55475-0575-41e9-9949-d5bdb86ee565\") " pod="openstack/kube-state-metrics-0"
Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.471222 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq79v\" (UniqueName: \"kubernetes.io/projected/48e55475-0575-41e9-9949-d5bdb86ee565-kube-api-access-tq79v\") pod \"kube-state-metrics-0\" (UID: \"48e55475-0575-41e9-9949-d5bdb86ee565\") " pod="openstack/kube-state-metrics-0"
\"kube-state-metrics-0\" (UID: \"48e55475-0575-41e9-9949-d5bdb86ee565\") " pod="openstack/kube-state-metrics-0" Jan 23 09:27:39 crc kubenswrapper[4684]: I0123 09:27:39.528066 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:27:40 crc kubenswrapper[4684]: I0123 09:27:40.060860 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:27:40 crc kubenswrapper[4684]: I0123 09:27:40.327526 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48e55475-0575-41e9-9949-d5bdb86ee565","Type":"ContainerStarted","Data":"9633ddbf304ca274323481b3201fc205270d10050cf1a627fe1c312839627e59"} Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.652245 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jgsg8"] Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.657111 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.662234 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8rk95" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.662507 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.662836 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.666096 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jgsg8"] Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.716513 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-c5pjd"] Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.717982 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.737948 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c5pjd"] Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833330 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-log\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833452 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-lib\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833482 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-combined-ca-bundle\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833565 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t758b\" (UniqueName: \"kubernetes.io/projected/c816dd8b-7da7-4424-8405-b44759f7861e-kube-api-access-t758b\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833609 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-ovn-controller-tls-certs\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833637 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-scripts\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833661 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztc8\" (UniqueName: \"kubernetes.io/projected/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-kube-api-access-gztc8\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833721 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-etc-ovs\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833746 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-run\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-run\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833792 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c816dd8b-7da7-4424-8405-b44759f7861e-scripts\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833829 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-run-ovn\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.833869 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-log-ovn\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.937917 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-log-ovn\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.937981 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-log\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938024 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-lib\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938048 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-combined-ca-bundle\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938107 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t758b\" (UniqueName: \"kubernetes.io/projected/c816dd8b-7da7-4424-8405-b44759f7861e-kube-api-access-t758b\") pod \"ovn-controller-ovs-c5pjd\" (UID: 
\"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938139 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-ovn-controller-tls-certs\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938168 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-scripts\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938191 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gztc8\" (UniqueName: \"kubernetes.io/projected/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-kube-api-access-gztc8\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938222 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-etc-ovs\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938244 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-run\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-run\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938288 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c816dd8b-7da7-4424-8405-b44759f7861e-scripts\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.938321 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-run-ovn\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.939301 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-log\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.939344 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-run\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.939415 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-run\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.939871 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-log-ovn\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.940931 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-etc-ovs\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.941993 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-scripts\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.942084 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c816dd8b-7da7-4424-8405-b44759f7861e-scripts\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.942266 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c816dd8b-7da7-4424-8405-b44759f7861e-var-lib\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.942396 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-var-run-ovn\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.944500 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-ovn-controller-tls-certs\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.960351 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t758b\" (UniqueName: \"kubernetes.io/projected/c816dd8b-7da7-4424-8405-b44759f7861e-kube-api-access-t758b\") pod \"ovn-controller-ovs-c5pjd\" (UID: \"c816dd8b-7da7-4424-8405-b44759f7861e\") " pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.964036 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-combined-ca-bundle\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.964695 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gztc8\" (UniqueName: \"kubernetes.io/projected/f6d184f2-6bff-43ba-98a6-6e131c7b45a8-kube-api-access-gztc8\") pod \"ovn-controller-jgsg8\" (UID: \"f6d184f2-6bff-43ba-98a6-6e131c7b45a8\") " pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:42 crc kubenswrapper[4684]: I0123 09:27:42.996000 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.050644 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.298269 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.307510 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.312188 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xgf5r" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.312366 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.312902 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.312931 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.312994 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.320652 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.453564 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.453939 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.454006 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nksr7\" (UniqueName: \"kubernetes.io/projected/092669ed-870b-4e9d-a34d-f62fca6b1660-kube-api-access-nksr7\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc 
kubenswrapper[4684]: I0123 09:27:43.454032 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/092669ed-870b-4e9d-a34d-f62fca6b1660-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.454050 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.454077 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.454098 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/092669ed-870b-4e9d-a34d-f62fca6b1660-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.454262 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092669ed-870b-4e9d-a34d-f62fca6b1660-config\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556262 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556302 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nksr7\" (UniqueName: \"kubernetes.io/projected/092669ed-870b-4e9d-a34d-f62fca6b1660-kube-api-access-nksr7\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556326 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/092669ed-870b-4e9d-a34d-f62fca6b1660-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556532 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556594 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556634 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/092669ed-870b-4e9d-a34d-f62fca6b1660-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556734 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092669ed-870b-4e9d-a34d-f62fca6b1660-config\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556837 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/092669ed-870b-4e9d-a34d-f62fca6b1660-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.556880 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.557847 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092669ed-870b-4e9d-a34d-f62fca6b1660-config\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.558300 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/092669ed-870b-4e9d-a34d-f62fca6b1660-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.567582 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.578482 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nksr7\" (UniqueName: \"kubernetes.io/projected/092669ed-870b-4e9d-a34d-f62fca6b1660-kube-api-access-nksr7\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.585933 4684 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.589658 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.592042 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092669ed-870b-4e9d-a34d-f62fca6b1660-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"092669ed-870b-4e9d-a34d-f62fca6b1660\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.628297 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.728876 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:27:43 crc kubenswrapper[4684]: I0123 09:27:43.728947 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.165320 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.167162 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.174317 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.174680 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.174939 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2wgvm" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.175070 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.180272 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313139 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313261 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313287 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzznr\" (UniqueName: \"kubernetes.io/projected/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-kube-api-access-mzznr\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313449 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-config\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313577 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313612 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313644 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.313892 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415058 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415109 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415135 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415202 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415240 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415275 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415299 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzznr\" (UniqueName: \"kubernetes.io/projected/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-kube-api-access-mzznr\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.415320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-config\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.416255 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.416679 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.417556 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-config\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.418540 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.422895 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.425486 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.433979 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.434491 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzznr\" (UniqueName: \"kubernetes.io/projected/960d904d-7d3d-4c6a-a933-cf6c6a31d01d-kube-api-access-mzznr\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.453178 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"960d904d-7d3d-4c6a-a933-cf6c6a31d01d\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:27:46 crc kubenswrapper[4684]: I0123 09:27:46.499947 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.288309 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.289155 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm4ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(6a0c15bc-8e5e-47ee-9c23-1673363f1603): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.290562 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.566146 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.895464 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.895664 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxmdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(82a71d38-3c68-43a9-9913-bc184ebed996): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:11 crc kubenswrapper[4684]: E0123 09:28:11.896932 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" Jan 23 09:28:12 crc kubenswrapper[4684]: E0123 09:28:12.222749 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 23 09:28:12 crc kubenswrapper[4684]: E0123 09:28:12.223084 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5fdh5dfh554h5b7hc7h7dh68hdfh64fh8chd4h79h678h66bhf7h86h674h75h597h68fh64fh584h95h79h58fh5f5h55fh59bhf6hb9h669h97q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw7s7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(7320f601-5b97-49b4-af32-aeae7d297ed1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:12 crc kubenswrapper[4684]: E0123 09:28:12.224306 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="7320f601-5b97-49b4-af32-aeae7d297ed1" Jan 23 09:28:12 crc kubenswrapper[4684]: E0123 09:28:12.570202 4684 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" Jan 23 09:28:12 crc kubenswrapper[4684]: E0123 09:28:12.570353 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="7320f601-5b97-49b4-af32-aeae7d297ed1" Jan 23 09:28:13 crc kubenswrapper[4684]: I0123 09:28:13.728457 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:28:13 crc kubenswrapper[4684]: I0123 09:28:13.728516 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.341893 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.342254 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9z4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-9fjr7_openstack(169ee556-d1ee-4f51-9958-46bd24d4467f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.343422 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.611614 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.894836 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.895024 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fngv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-vxhm7_openstack(42066789-739c-4d7a-9072-0b67742f5ceb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:15 crc kubenswrapper[4684]: E0123 09:28:15.896215 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" podUID="42066789-739c-4d7a-9072-0b67742f5ceb" Jan 23 09:28:16 crc kubenswrapper[4684]: E0123 09:28:16.668589 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 09:28:16 crc kubenswrapper[4684]: E0123 09:28:16.669160 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98m69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-rq86k_openstack(37b79503-0495-4e7c-8bd4-c50fe67c35c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:16 crc kubenswrapper[4684]: E0123 09:28:16.670479 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" Jan 23 09:28:17 crc kubenswrapper[4684]: E0123 09:28:17.620225 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" Jan 23 09:28:17 crc kubenswrapper[4684]: E0123 09:28:17.997009 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 23 09:28:17 crc kubenswrapper[4684]: E0123 09:28:17.997187 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sm2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(01c5f17c-8303-4cae-b577-1da34c402098): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:17 crc kubenswrapper[4684]: E0123 09:28:17.998735 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="01c5f17c-8303-4cae-b577-1da34c402098" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.105665 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.178220 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42066789-739c-4d7a-9072-0b67742f5ceb-config\") pod \"42066789-739c-4d7a-9072-0b67742f5ceb\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.178330 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fngv8\" (UniqueName: \"kubernetes.io/projected/42066789-739c-4d7a-9072-0b67742f5ceb-kube-api-access-fngv8\") pod \"42066789-739c-4d7a-9072-0b67742f5ceb\" (UID: \"42066789-739c-4d7a-9072-0b67742f5ceb\") " Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.180677 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42066789-739c-4d7a-9072-0b67742f5ceb-config" (OuterVolumeSpecName: "config") pod "42066789-739c-4d7a-9072-0b67742f5ceb" (UID: "42066789-739c-4d7a-9072-0b67742f5ceb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.193141 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42066789-739c-4d7a-9072-0b67742f5ceb-kube-api-access-fngv8" (OuterVolumeSpecName: "kube-api-access-fngv8") pod "42066789-739c-4d7a-9072-0b67742f5ceb" (UID: "42066789-739c-4d7a-9072-0b67742f5ceb"). InnerVolumeSpecName "kube-api-access-fngv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.281379 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42066789-739c-4d7a-9072-0b67742f5ceb-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.281434 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fngv8\" (UniqueName: \"kubernetes.io/projected/42066789-739c-4d7a-9072-0b67742f5ceb-kube-api-access-fngv8\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:18 crc kubenswrapper[4684]: E0123 09:28:18.415865 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 09:28:18 crc kubenswrapper[4684]: E0123 09:28:18.416318 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5wq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-wdfk4_openstack(236ba707-81cb-4106-95ed-09134443809a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:18 crc kubenswrapper[4684]: E0123 09:28:18.417402 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" podUID="236ba707-81cb-4106-95ed-09134443809a" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.621927 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.629920 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-vxhm7" event={"ID":"42066789-739c-4d7a-9072-0b67742f5ceb","Type":"ContainerDied","Data":"0cd60bad5a4a667976206029f833989bf1c7bde06a314f5423f5fba2937fcc46"} Jan 23 09:28:18 crc kubenswrapper[4684]: E0123 09:28:18.632865 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="01c5f17c-8303-4cae-b577-1da34c402098" Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.767157 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vxhm7"] Jan 23 09:28:18 crc kubenswrapper[4684]: W0123 09:28:18.786388 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d184f2_6bff_43ba_98a6_6e131c7b45a8.slice/crio-6ef23f0a2b7172e4fa8358b55a4288bb143e7ff348f047923253ccdc80f7f5d2 WatchSource:0}: Error finding container 6ef23f0a2b7172e4fa8358b55a4288bb143e7ff348f047923253ccdc80f7f5d2: Status 404 returned error can't find the container with id 6ef23f0a2b7172e4fa8358b55a4288bb143e7ff348f047923253ccdc80f7f5d2 Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.806734 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vxhm7"] Jan 23 09:28:18 crc kubenswrapper[4684]: I0123 09:28:18.836812 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jgsg8"] Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.055402 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.099194 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-config\") pod \"236ba707-81cb-4106-95ed-09134443809a\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.099305 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-dns-svc\") pod \"236ba707-81cb-4106-95ed-09134443809a\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.099466 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5wq8\" (UniqueName: \"kubernetes.io/projected/236ba707-81cb-4106-95ed-09134443809a-kube-api-access-h5wq8\") pod \"236ba707-81cb-4106-95ed-09134443809a\" (UID: \"236ba707-81cb-4106-95ed-09134443809a\") " Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.099995 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-config" (OuterVolumeSpecName: "config") pod "236ba707-81cb-4106-95ed-09134443809a" (UID: "236ba707-81cb-4106-95ed-09134443809a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.100197 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.100950 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "236ba707-81cb-4106-95ed-09134443809a" (UID: "236ba707-81cb-4106-95ed-09134443809a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.105028 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236ba707-81cb-4106-95ed-09134443809a-kube-api-access-h5wq8" (OuterVolumeSpecName: "kube-api-access-h5wq8") pod "236ba707-81cb-4106-95ed-09134443809a" (UID: "236ba707-81cb-4106-95ed-09134443809a"). InnerVolumeSpecName "kube-api-access-h5wq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.202360 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236ba707-81cb-4106-95ed-09134443809a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.202443 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5wq8\" (UniqueName: \"kubernetes.io/projected/236ba707-81cb-4106-95ed-09134443809a-kube-api-access-h5wq8\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.270098 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 09:28:19 crc kubenswrapper[4684]: W0123 09:28:19.278060 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod960d904d_7d3d_4c6a_a933_cf6c6a31d01d.slice/crio-d2e72ce5d7189c266f3f3a1c66e67becc79699e3ed15c514940847bb11fbbdb6 WatchSource:0}: Error finding container d2e72ce5d7189c266f3f3a1c66e67becc79699e3ed15c514940847bb11fbbdb6: Status 404 returned error can't find the container with id d2e72ce5d7189c266f3f3a1c66e67becc79699e3ed15c514940847bb11fbbdb6 Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.593950 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42066789-739c-4d7a-9072-0b67742f5ceb" path="/var/lib/kubelet/pods/42066789-739c-4d7a-9072-0b67742f5ceb/volumes" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.628372 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"960d904d-7d3d-4c6a-a933-cf6c6a31d01d","Type":"ContainerStarted","Data":"d2e72ce5d7189c266f3f3a1c66e67becc79699e3ed15c514940847bb11fbbdb6"} Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.629603 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8" event={"ID":"f6d184f2-6bff-43ba-98a6-6e131c7b45a8","Type":"ContainerStarted","Data":"6ef23f0a2b7172e4fa8358b55a4288bb143e7ff348f047923253ccdc80f7f5d2"} Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.631311 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" 
event={"ID":"236ba707-81cb-4106-95ed-09134443809a","Type":"ContainerDied","Data":"a1ffc27ad82b329d852479d6f3ccb087eb312023934fa7e1ca22cd00217c32b0"} Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.631481 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wdfk4" Jan 23 09:28:19 crc kubenswrapper[4684]: E0123 09:28:19.662180 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 23 09:28:19 crc kubenswrapper[4684]: E0123 09:28:19.662248 4684 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 23 09:28:19 crc kubenswrapper[4684]: E0123 09:28:19.662396 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq79v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(48e55475-0575-41e9-9949-d5bdb86ee565): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 23 09:28:19 crc kubenswrapper[4684]: E0123 09:28:19.663631 4684 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.674502 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wdfk4"] Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.681387 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wdfk4"] Jan 23 09:28:19 crc kubenswrapper[4684]: I0123 09:28:19.969846 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c5pjd"] Jan 23 09:28:20 crc kubenswrapper[4684]: I0123 09:28:20.120285 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 09:28:21 crc kubenswrapper[4684]: I0123 09:28:21.590780 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236ba707-81cb-4106-95ed-09134443809a" path="/var/lib/kubelet/pods/236ba707-81cb-4106-95ed-09134443809a/volumes" Jan 23 09:28:21 crc kubenswrapper[4684]: E0123 09:28:21.662656 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" Jan 23 09:28:21 crc kubenswrapper[4684]: W0123 09:28:21.666247 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc816dd8b_7da7_4424_8405_b44759f7861e.slice/crio-7fc191cc087479b6c1a021bb45785fc8c1a693d656878d4fd4f3c8922364aba1 WatchSource:0}: Error finding container 7fc191cc087479b6c1a021bb45785fc8c1a693d656878d4fd4f3c8922364aba1: Status 404 returned error can't find the container with id 7fc191cc087479b6c1a021bb45785fc8c1a693d656878d4fd4f3c8922364aba1 Jan 23 09:28:21 crc kubenswrapper[4684]: W0123 09:28:21.668180 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod092669ed_870b_4e9d_a34d_f62fca6b1660.slice/crio-f3330544e36a7244b62c4660863eb3fb9558471b61de27e1fd9a7e880b9b4b32 WatchSource:0}: Error finding container f3330544e36a7244b62c4660863eb3fb9558471b61de27e1fd9a7e880b9b4b32: Status 404 returned error can't find the container with id f3330544e36a7244b62c4660863eb3fb9558471b61de27e1fd9a7e880b9b4b32 Jan 23 09:28:22 crc kubenswrapper[4684]: I0123 09:28:22.659370 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5pjd" event={"ID":"c816dd8b-7da7-4424-8405-b44759f7861e","Type":"ContainerStarted","Data":"7fc191cc087479b6c1a021bb45785fc8c1a693d656878d4fd4f3c8922364aba1"} Jan 23 09:28:22 crc kubenswrapper[4684]: I0123 09:28:22.660631 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"092669ed-870b-4e9d-a34d-f62fca6b1660","Type":"ContainerStarted","Data":"f3330544e36a7244b62c4660863eb3fb9558471b61de27e1fd9a7e880b9b4b32"} Jan 23 09:28:28 crc kubenswrapper[4684]: I0123 09:28:28.701079 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"80a7fc30-a101-4948-9e81-34c2dfb02797","Type":"ContainerStarted","Data":"2eeb1a18d853f02145200e27a35fcad484298d581311b41c20b4fec75410e28c"} Jan 23 09:28:43 crc kubenswrapper[4684]: I0123 09:28:43.728516 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:28:43 crc kubenswrapper[4684]: I0123 09:28:43.729002 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:28:43 crc kubenswrapper[4684]: I0123 09:28:43.729045 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:28:43 crc kubenswrapper[4684]: I0123 09:28:43.729783 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ade61f7f4bbb3f3f435e6b903b0fe87d7cf6cd2ec8e018e44229efc22831425"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:28:43 crc kubenswrapper[4684]: I0123 09:28:43.729841 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://8ade61f7f4bbb3f3f435e6b903b0fe87d7cf6cd2ec8e018e44229efc22831425" gracePeriod=600 Jan 23 09:28:47 crc kubenswrapper[4684]: E0123 09:28:47.329605 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 23 09:28:47 crc kubenswrapper[4684]: E0123 09:28:47.330076 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxmdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(82a71d38-3c68-43a9-9913-bc184ebed996): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:28:47 crc kubenswrapper[4684]: E0123 09:28:47.331223 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" Jan 23 09:28:47 crc kubenswrapper[4684]: I0123 09:28:47.871868 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="8ade61f7f4bbb3f3f435e6b903b0fe87d7cf6cd2ec8e018e44229efc22831425" exitCode=0 Jan 23 09:28:47 crc kubenswrapper[4684]: I0123 09:28:47.872296 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"8ade61f7f4bbb3f3f435e6b903b0fe87d7cf6cd2ec8e018e44229efc22831425"} Jan 23 09:28:47 crc kubenswrapper[4684]: I0123 09:28:47.872397 4684 scope.go:117] "RemoveContainer" containerID="d189a4bad8ef4c719b144352564a4f1767ae642d4e80c3912415bf811a82f8e8" Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.881985 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c5f17c-8303-4cae-b577-1da34c402098","Type":"ContainerStarted","Data":"d574a865c378728df0091ece58d64472cfb31cd81f936568697d600b16b4b37b"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.883232 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"7320f601-5b97-49b4-af32-aeae7d297ed1","Type":"ContainerStarted","Data":"5f904523b9870bddd78074b2c5926228108c2e90d16a850ebb402b09fc83bff8"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.885361 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"8a400f51794ef4b6fdc66ad213f603d86645f2ebb5c89b0aaf3a7b97ea9ba3a1"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.886560 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" event={"ID":"169ee556-d1ee-4f51-9958-46bd24d4467f","Type":"ContainerStarted","Data":"983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.887820 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" event={"ID":"37b79503-0495-4e7c-8bd4-c50fe67c35c5","Type":"ContainerStarted","Data":"c687834328e7215e3411666d856126b09efd5ae84ba3c0dcdfe094d77a8e8121"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.889076 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"092669ed-870b-4e9d-a34d-f62fca6b1660","Type":"ContainerStarted","Data":"1d92148498f8db44a030c9849c6a4c62d1f9497c2637b526f478bd203383d451"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.890823 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"960d904d-7d3d-4c6a-a933-cf6c6a31d01d","Type":"ContainerStarted","Data":"c1fa7b354dbfd0de4232c033ade671b6307bf4e5d68e9b9b6172599e80c3533d"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.892226 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8" event={"ID":"f6d184f2-6bff-43ba-98a6-6e131c7b45a8","Type":"ContainerStarted","Data":"098fdfc00e5862692de09914b25c875225fe40df82159ac019dc970f1b410fdd"} Jan 23 09:28:48 crc kubenswrapper[4684]: I0123 09:28:48.893591 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5pjd" event={"ID":"c816dd8b-7da7-4424-8405-b44759f7861e","Type":"ContainerStarted","Data":"df24072472a97d719b448d7a6356349c67d25362cb8998e5e7a38bda050899af"} Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.901497 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a0c15bc-8e5e-47ee-9c23-1673363f1603","Type":"ContainerStarted","Data":"358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54"} Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.903237 4684 generic.go:334] "Generic (PLEG): container finished" podID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerID="c687834328e7215e3411666d856126b09efd5ae84ba3c0dcdfe094d77a8e8121" exitCode=0 Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.903328 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" event={"ID":"37b79503-0495-4e7c-8bd4-c50fe67c35c5","Type":"ContainerDied","Data":"c687834328e7215e3411666d856126b09efd5ae84ba3c0dcdfe094d77a8e8121"} Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.905118 4684 generic.go:334] "Generic (PLEG): container finished" podID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerID="983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7" exitCode=0 Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.905228 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" event={"ID":"169ee556-d1ee-4f51-9958-46bd24d4467f","Type":"ContainerDied","Data":"983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7"} Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.905577 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jgsg8" Jan 23 09:28:49 crc kubenswrapper[4684]: I0123 09:28:49.960537 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jgsg8" podStartSLOduration=39.088333933 podStartE2EDuration="1m7.960521431s" podCreationTimestamp="2026-01-23 09:27:42 +0000 UTC" firstStartedPulling="2026-01-23 09:28:18.795417453 +0000 UTC m=+1271.418796004" lastFinishedPulling="2026-01-23 09:28:47.667604961 +0000 UTC m=+1300.290983502" observedRunningTime="2026-01-23 09:28:49.950040244 +0000 UTC m=+1302.573418805" watchObservedRunningTime="2026-01-23 09:28:49.960521431 +0000 UTC m=+1302.583899972" Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.049483 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.116276566 podStartE2EDuration="1m13.049461828s" podCreationTimestamp="2026-01-23 09:27:37 +0000 UTC" firstStartedPulling="2026-01-23 09:27:38.717236723 +0000 UTC m=+1231.340615274" lastFinishedPulling="2026-01-23 09:28:47.650421995 +0000 UTC m=+1300.273800536" observedRunningTime="2026-01-23 09:28:50.028063873 +0000 UTC m=+1302.651442434" watchObservedRunningTime="2026-01-23 09:28:50.049461828 +0000 UTC m=+1302.672840369" Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.915743 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" event={"ID":"169ee556-d1ee-4f51-9958-46bd24d4467f","Type":"ContainerStarted","Data":"5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c"} Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.916376 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.917796 4684 generic.go:334] "Generic (PLEG): container finished" podID="c816dd8b-7da7-4424-8405-b44759f7861e" containerID="df24072472a97d719b448d7a6356349c67d25362cb8998e5e7a38bda050899af" exitCode=0 Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.917873 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5pjd" event={"ID":"c816dd8b-7da7-4424-8405-b44759f7861e","Type":"ContainerDied","Data":"df24072472a97d719b448d7a6356349c67d25362cb8998e5e7a38bda050899af"} Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.920305 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" event={"ID":"37b79503-0495-4e7c-8bd4-c50fe67c35c5","Type":"ContainerStarted","Data":"b90ef6ba1b420eb9bbc1ad2f4c7beed13a80b7b4e376fb9ea8b48f97eb68229d"} Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.920555 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.943028 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" podStartSLOduration=4.5857224 podStartE2EDuration="1m17.94300842s" podCreationTimestamp="2026-01-23 09:27:33 +0000 UTC" firstStartedPulling="2026-01-23 09:27:34.298168857 +0000 UTC 
Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.943028 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" podStartSLOduration=4.5857224 podStartE2EDuration="1m17.94300842s" podCreationTimestamp="2026-01-23 09:27:33 +0000 UTC" firstStartedPulling="2026-01-23 09:27:34.298168857 +0000 UTC m=+1226.921547398" lastFinishedPulling="2026-01-23 09:28:47.655454877 +0000 UTC m=+1300.278833418" observedRunningTime="2026-01-23 09:28:50.936105355 +0000 UTC m=+1303.559483906" watchObservedRunningTime="2026-01-23 09:28:50.94300842 +0000 UTC m=+1303.566386961"
Jan 23 09:28:50 crc kubenswrapper[4684]: I0123 09:28:50.984647 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" podStartSLOduration=5.124831877 podStartE2EDuration="1m18.984626898s" podCreationTimestamp="2026-01-23 09:27:32 +0000 UTC" firstStartedPulling="2026-01-23 09:27:33.807327596 +0000 UTC m=+1226.430706137" lastFinishedPulling="2026-01-23 09:28:47.667122617 +0000 UTC m=+1300.290501158" observedRunningTime="2026-01-23 09:28:50.983894597 +0000 UTC m=+1303.607273158" watchObservedRunningTime="2026-01-23 09:28:50.984626898 +0000 UTC m=+1303.608005439"
Jan 23 09:28:52 crc kubenswrapper[4684]: I0123 09:28:52.579507 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 23 09:28:52 crc kubenswrapper[4684]: I0123 09:28:52.935386 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5pjd" event={"ID":"c816dd8b-7da7-4424-8405-b44759f7861e","Type":"ContainerStarted","Data":"94128f6877cdb9fb62becd87c0a2bd5e10c1224d5ef906c7c2007e69970870c3"}
Jan 23 09:28:52 crc kubenswrapper[4684]: I0123 09:28:52.935432 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c5pjd" event={"ID":"c816dd8b-7da7-4424-8405-b44759f7861e","Type":"ContainerStarted","Data":"673f0db6b4c2d10b4783c1d38b5d028e83781d5d75a7a3501b61fe7991a0412a"}
Jan 23 09:28:53 crc kubenswrapper[4684]: I0123 09:28:53.942344 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c5pjd"
Jan 23 09:28:53 crc kubenswrapper[4684]: I0123 09:28:53.975031 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-c5pjd" podStartSLOduration=48.313984851 podStartE2EDuration="1m11.97501323s" podCreationTimestamp="2026-01-23 09:27:42 +0000 UTC" firstStartedPulling="2026-01-23 09:28:23.988368587 +0000 UTC m=+1276.611747128" lastFinishedPulling="2026-01-23 09:28:47.649396966 +0000 UTC m=+1300.272775507" observedRunningTime="2026-01-23 09:28:53.972954751 +0000 UTC m=+1306.596333292" watchObservedRunningTime="2026-01-23 09:28:53.97501323 +0000 UTC m=+1306.598391771"
Jan 23 09:28:54 crc kubenswrapper[4684]: I0123 09:28:54.951060 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c5pjd"
Jan 23 09:28:57 crc kubenswrapper[4684]: I0123 09:28:57.580509 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 23 09:28:58 crc kubenswrapper[4684]: I0123 09:28:58.440885 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k"
Jan 23 09:28:58 crc kubenswrapper[4684]: I0123 09:28:58.597413 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7"
Jan 23 09:28:58 crc kubenswrapper[4684]: I0123 09:28:58.665077 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-rq86k"]
Jan 23 09:28:58 crc kubenswrapper[4684]: I0123 09:28:58.975943 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="dnsmasq-dns" containerID="cri-o://b90ef6ba1b420eb9bbc1ad2f4c7beed13a80b7b4e376fb9ea8b48f97eb68229d" gracePeriod=10
Jan 23 09:29:00 crc kubenswrapper[4684]: I0123 09:29:00.992235 4684 generic.go:334] "Generic (PLEG): container finished" podID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerID="b90ef6ba1b420eb9bbc1ad2f4c7beed13a80b7b4e376fb9ea8b48f97eb68229d" exitCode=0
Jan 23 09:29:00 crc kubenswrapper[4684]: I0123 09:29:00.992341 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" event={"ID":"37b79503-0495-4e7c-8bd4-c50fe67c35c5","Type":"ContainerDied","Data":"b90ef6ba1b420eb9bbc1ad2f4c7beed13a80b7b4e376fb9ea8b48f97eb68229d"}
Jan 23 09:29:08 crc kubenswrapper[4684]: I0123 09:29:08.441307 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.95:5353: i/o timeout"
Jan 23 09:29:08 crc kubenswrapper[4684]: I0123 09:29:08.971792 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k"
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.052376 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" event={"ID":"37b79503-0495-4e7c-8bd4-c50fe67c35c5","Type":"ContainerDied","Data":"a6a23007b968f1f8301cc4108f2a37faf4ecd3323b9e03fe7975e82a3986aea3"}
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.052419 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k"
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.052433 4684 scope.go:117] "RemoveContainer" containerID="b90ef6ba1b420eb9bbc1ad2f4c7beed13a80b7b4e376fb9ea8b48f97eb68229d"
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.095802 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-config\") pod \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") "
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.095855 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-dns-svc\") pod \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") "
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.095975 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98m69\" (UniqueName: \"kubernetes.io/projected/37b79503-0495-4e7c-8bd4-c50fe67c35c5-kube-api-access-98m69\") pod \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\" (UID: \"37b79503-0495-4e7c-8bd4-c50fe67c35c5\") "
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.100991 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b79503-0495-4e7c-8bd4-c50fe67c35c5-kube-api-access-98m69" (OuterVolumeSpecName: "kube-api-access-98m69") pod "37b79503-0495-4e7c-8bd4-c50fe67c35c5" (UID: "37b79503-0495-4e7c-8bd4-c50fe67c35c5"). InnerVolumeSpecName "kube-api-access-98m69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.138532 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-config" (OuterVolumeSpecName: "config") pod "37b79503-0495-4e7c-8bd4-c50fe67c35c5" (UID: "37b79503-0495-4e7c-8bd4-c50fe67c35c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.171961 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37b79503-0495-4e7c-8bd4-c50fe67c35c5" (UID: "37b79503-0495-4e7c-8bd4-c50fe67c35c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.197773 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98m69\" (UniqueName: \"kubernetes.io/projected/37b79503-0495-4e7c-8bd4-c50fe67c35c5-kube-api-access-98m69\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.198038 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-config\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.198058 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37b79503-0495-4e7c-8bd4-c50fe67c35c5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.407316 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-rq86k"]
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.421992 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-rq86k"]
Jan 23 09:29:09 crc kubenswrapper[4684]: I0123 09:29:09.591809 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" path="/var/lib/kubelet/pods/37b79503-0495-4e7c-8bd4-c50fe67c35c5/volumes"
Jan 23 09:29:10 crc kubenswrapper[4684]: I0123 09:29:10.065263 4684 generic.go:334] "Generic (PLEG): container finished" podID="80a7fc30-a101-4948-9e81-34c2dfb02797" containerID="2eeb1a18d853f02145200e27a35fcad484298d581311b41c20b4fec75410e28c" exitCode=0
Jan 23 09:29:10 crc kubenswrapper[4684]: I0123 09:29:10.065328 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80a7fc30-a101-4948-9e81-34c2dfb02797","Type":"ContainerDied","Data":"2eeb1a18d853f02145200e27a35fcad484298d581311b41c20b4fec75410e28c"}
Jan 23 09:29:12 crc kubenswrapper[4684]: I0123 09:29:12.481986 4684 scope.go:117] "RemoveContainer" containerID="c687834328e7215e3411666d856126b09efd5ae84ba3c0dcdfe094d77a8e8121"
Jan 23 09:29:13 crc kubenswrapper[4684]: I0123 09:29:13.442531 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-744ffd65bc-rq86k" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.95:5353: i/o timeout"
Jan 23 09:29:13 crc kubenswrapper[4684]: E0123 09:29:13.990981 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"
Jan 23 09:29:13 crc kubenswrapper[4684]: E0123 09:29:13.991143 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5c9h56bh657h696h7dh684hb6h688h76h598h5c7h69hdchd9h679h647hd9h544h5f6h584h587h6bh5c5h5d9h696h5ffh5b7hc8h589h5cfh56fh554q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzznr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(960d904d-7d3d-4c6a-a933-cf6c6a31d01d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 09:29:13 crc kubenswrapper[4684]: E0123 09:29:13.992344 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="960d904d-7d3d-4c6a-a933-cf6c6a31d01d"
Jan 23 09:29:15 crc kubenswrapper[4684]: I0123 09:29:15.104649 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80a7fc30-a101-4948-9e81-34c2dfb02797","Type":"ContainerStarted","Data":"6052193c84f6ed02a75bd462a4b4071d813b79fcd19fc1c8d88937b78c6480dd"}
Jan 23 09:29:15 crc kubenswrapper[4684]: I0123 09:29:15.107104 4684 generic.go:334] "Generic (PLEG): container finished" podID="01c5f17c-8303-4cae-b577-1da34c402098" containerID="d574a865c378728df0091ece58d64472cfb31cd81f936568697d600b16b4b37b" exitCode=0
Jan 23 09:29:15 crc kubenswrapper[4684]: I0123 09:29:15.107142 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c5f17c-8303-4cae-b577-1da34c402098","Type":"ContainerDied","Data":"d574a865c378728df0091ece58d64472cfb31cd81f936568697d600b16b4b37b"}
Jan 23 09:29:15 crc kubenswrapper[4684]: I0123 09:29:15.127855 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=54.502701413 podStartE2EDuration="1m40.127837702s" podCreationTimestamp="2026-01-23 09:27:35 +0000 UTC" firstStartedPulling="2026-01-23 09:27:38.379903566 +0000 UTC m=+1231.003282097" lastFinishedPulling="2026-01-23 09:28:24.005039845 +0000 UTC m=+1276.628418386" observedRunningTime="2026-01-23 09:29:15.125461385 +0000 UTC m=+1327.748839936" watchObservedRunningTime="2026-01-23 09:29:15.127837702 +0000 UTC m=+1327.751216243"
Jan 23 09:29:15 crc kubenswrapper[4684]: E0123 09:29:15.552202 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb"
Jan 23 09:29:15 crc kubenswrapper[4684]: E0123 09:29:15.552590 4684 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb"
Jan 23 09:29:15 crc kubenswrapper[4684]: E0123 09:29:15.552833 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq79v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(48e55475-0575-41e9-9949-d5bdb86ee565): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 23 09:29:15 crc kubenswrapper[4684]: E0123 09:29:15.556365 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="48e55475-0575-41e9-9949-d5bdb86ee565"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.116122 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"092669ed-870b-4e9d-a34d-f62fca6b1660","Type":"ContainerStarted","Data":"57f0670a6c21b18c242b38ca6593521edfa572756d6d55dcc32c6dae2b2a8c4b"}
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.119519 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c5f17c-8303-4cae-b577-1da34c402098","Type":"ContainerStarted","Data":"79362531a9197725f91552f260129bf324047bb69c9ff9abae1e970545df5230"}
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.121885 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"960d904d-7d3d-4c6a-a933-cf6c6a31d01d","Type":"ContainerStarted","Data":"c66f89ee7e9238bbc73e90a1716f13522b011a8ce0b2a7f73f38c15f289eacd7"}
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.123776 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82a71d38-3c68-43a9-9913-bc184ebed996","Type":"ContainerStarted","Data":"117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6"}
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.143677 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=43.580563505 podStartE2EDuration="1m34.143652635s" podCreationTimestamp="2026-01-23 09:27:42 +0000 UTC" firstStartedPulling="2026-01-23 09:28:23.98844416 +0000 UTC m=+1276.611822701" lastFinishedPulling="2026-01-23 09:29:14.55153329 +0000 UTC m=+1327.174911831" observedRunningTime="2026-01-23 09:29:16.138262702 +0000 UTC m=+1328.761641263" watchObservedRunningTime="2026-01-23 09:29:16.143652635 +0000 UTC m=+1328.767031186"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.171034 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=62.782096011 podStartE2EDuration="1m31.171016389s" podCreationTimestamp="2026-01-23 09:27:45 +0000 UTC" firstStartedPulling="2026-01-23 09:28:19.279569388 +0000 UTC m=+1271.902947929" lastFinishedPulling="2026-01-23 09:28:47.668489766 +0000 UTC m=+1300.291868307" observedRunningTime="2026-01-23 09:29:16.162856558 +0000 UTC m=+1328.786235109" watchObservedRunningTime="2026-01-23 09:29:16.171016389 +0000 UTC m=+1328.794394930"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.185198 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371934.669598 podStartE2EDuration="1m42.1851785s" podCreationTimestamp="2026-01-23 09:27:34 +0000 UTC" firstStartedPulling="2026-01-23 09:27:36.898603035 +0000 UTC m=+1229.521981576" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:16.183491742 +0000 UTC m=+1328.806870293" watchObservedRunningTime="2026-01-23 09:29:16.1851785 +0000 UTC m=+1328.808557041"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.500891 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.501254 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.539656 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.629667 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 23 09:29:16 crc kubenswrapper[4684]: I0123 09:29:16.668997 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.130597 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.171368 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.172300 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.288284 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.288341 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.443516 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-cr8pj"]
Jan 23 09:29:17 crc kubenswrapper[4684]: E0123 09:29:17.444221 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="init"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.444246 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="init"
Jan 23 09:29:17 crc kubenswrapper[4684]: E0123 09:29:17.444279 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="dnsmasq-dns"
Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.444290 4684 state_mem.go:107] "Deleted CPUSet assignment"
podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="dnsmasq-dns" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.444481 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b79503-0495-4e7c-8bd4-c50fe67c35c5" containerName="dnsmasq-dns" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.445567 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.447992 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.452209 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-cr8pj"] Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.571516 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.571865 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-dns-svc\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.571981 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-config\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.572015 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9jd\" (UniqueName: \"kubernetes.io/projected/11507910-c9ab-4a8d-b0e9-c5e2425c3338-kube-api-access-rv9jd\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.673367 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9jd\" (UniqueName: \"kubernetes.io/projected/11507910-c9ab-4a8d-b0e9-c5e2425c3338-kube-api-access-rv9jd\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.673460 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.673512 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-dns-svc\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " 
pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.673580 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-config\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.674649 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.674579 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-config\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.674467 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-dns-svc\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.694553 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9jd\" (UniqueName: \"kubernetes.io/projected/11507910-c9ab-4a8d-b0e9-c5e2425c3338-kube-api-access-rv9jd\") pod \"dnsmasq-dns-5b79764b65-cr8pj\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:17 crc kubenswrapper[4684]: I0123 09:29:17.766007 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:18 crc kubenswrapper[4684]: I0123 09:29:18.196568 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-cr8pj"] Jan 23 09:29:19 crc kubenswrapper[4684]: I0123 09:29:19.143443 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" event={"ID":"11507910-c9ab-4a8d-b0e9-c5e2425c3338","Type":"ContainerStarted","Data":"0a0d8f2222d2c15e9662598906a285792f3547fa86c9cb18c9f5e7136589fcea"} Jan 23 09:29:19 crc kubenswrapper[4684]: I0123 09:29:19.924863 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-cr8pj"] Jan 23 09:29:19 crc kubenswrapper[4684]: I0123 09:29:19.966365 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-fpmhg"] Jan 23 09:29:19 crc kubenswrapper[4684]: I0123 09:29:19.968065 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:19 crc kubenswrapper[4684]: I0123 09:29:19.970610 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.000602 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-fpmhg"] Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.011364 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-config\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.011454 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-dns-svc\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.011500 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.011590 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjb9w\" (UniqueName: \"kubernetes.io/projected/d2457a57-4283-4e26-982f-62acaa95c1bf-kube-api-access-cjb9w\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.011615 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.040291 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.041565 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.049981 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.050219 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.050354 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.052857 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jtqxn" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.060111 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-x2qgc"] Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.061196 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.068927 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.103891 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x2qgc"] Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.111848 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112677 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-config\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112760 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-dns-svc\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112787 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112809 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqpn7\" (UniqueName: \"kubernetes.io/projected/366a8d70-2aa4-439d-a14e-4459b3f45736-kube-api-access-fqpn7\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112826 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-ovn-rundir\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112849 
4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112891 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/366a8d70-2aa4-439d-a14e-4459b3f45736-scripts\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112912 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112932 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-ovs-rundir\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112948 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366a8d70-2aa4-439d-a14e-4459b3f45736-config\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112965 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.112983 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpgxt\" (UniqueName: \"kubernetes.io/projected/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-kube-api-access-tpgxt\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113000 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/366a8d70-2aa4-439d-a14e-4459b3f45736-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113018 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-config\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113037 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-combined-ca-bundle\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113055 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113080 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjb9w\" (UniqueName: \"kubernetes.io/projected/d2457a57-4283-4e26-982f-62acaa95c1bf-kube-api-access-cjb9w\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113102 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.113908 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.114454 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-config\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.114987 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-dns-svc\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.115489 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.143811 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjb9w\" (UniqueName: \"kubernetes.io/projected/d2457a57-4283-4e26-982f-62acaa95c1bf-kube-api-access-cjb9w\") pod \"dnsmasq-dns-586b989cdc-fpmhg\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.214843 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.215173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqpn7\" (UniqueName: \"kubernetes.io/projected/366a8d70-2aa4-439d-a14e-4459b3f45736-kube-api-access-fqpn7\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.215228 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-ovn-rundir\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.215563 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-ovn-rundir\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.215657 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/366a8d70-2aa4-439d-a14e-4459b3f45736-scripts\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216481 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216503 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-ovs-rundir\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216538 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366a8d70-2aa4-439d-a14e-4459b3f45736-config\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216582 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpgxt\" (UniqueName: \"kubernetes.io/projected/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-kube-api-access-tpgxt\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 
09:29:20.216617 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/366a8d70-2aa4-439d-a14e-4459b3f45736-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216639 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-config\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216710 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-combined-ca-bundle\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216732 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.216426 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/366a8d70-2aa4-439d-a14e-4459b3f45736-scripts\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.217950 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-ovs-rundir\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.218423 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/366a8d70-2aa4-439d-a14e-4459b3f45736-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.218575 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366a8d70-2aa4-439d-a14e-4459b3f45736-config\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.218727 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-config\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.222814 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2qgc\" (UID: 
\"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.223910 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.224738 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.227558 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-combined-ca-bundle\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.231289 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366a8d70-2aa4-439d-a14e-4459b3f45736-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.240305 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqpn7\" (UniqueName: \"kubernetes.io/projected/366a8d70-2aa4-439d-a14e-4459b3f45736-kube-api-access-fqpn7\") pod \"ovn-northd-0\" (UID: \"366a8d70-2aa4-439d-a14e-4459b3f45736\") " pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.242783 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpgxt\" (UniqueName: \"kubernetes.io/projected/8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86-kube-api-access-tpgxt\") pod \"ovn-controller-metrics-x2qgc\" (UID: \"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86\") " pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.287397 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.357525 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.385798 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-x2qgc" Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.840767 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-fpmhg"] Jan 23 09:29:20 crc kubenswrapper[4684]: I0123 09:29:20.929357 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.016018 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x2qgc"] Jan 23 09:29:21 crc kubenswrapper[4684]: W0123 09:29:21.022632 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2ed8cb_f8c4_4ee2_884e_13a286ef4c86.slice/crio-2bd77994d88aa11354dcb35a1b8dbe383cb197f6e5d4aee1281dcd8aaf4a9671 WatchSource:0}: Error finding container 2bd77994d88aa11354dcb35a1b8dbe383cb197f6e5d4aee1281dcd8aaf4a9671: Status 404 returned error can't find the container with id 2bd77994d88aa11354dcb35a1b8dbe383cb197f6e5d4aee1281dcd8aaf4a9671 Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.190281 4684 generic.go:334] "Generic (PLEG): container finished" podID="11507910-c9ab-4a8d-b0e9-c5e2425c3338" containerID="80057a79163d96945e6e9b9cc2c0d2d023fbb384f7164b561f997a10aa557889" exitCode=0 Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.190690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" event={"ID":"11507910-c9ab-4a8d-b0e9-c5e2425c3338","Type":"ContainerDied","Data":"80057a79163d96945e6e9b9cc2c0d2d023fbb384f7164b561f997a10aa557889"} Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.193721 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x2qgc" event={"ID":"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86","Type":"ContainerStarted","Data":"2bd77994d88aa11354dcb35a1b8dbe383cb197f6e5d4aee1281dcd8aaf4a9671"} Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.197228 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" event={"ID":"d2457a57-4283-4e26-982f-62acaa95c1bf","Type":"ContainerStarted","Data":"d80b133b7772d25ddf139545d4d612b6713e72bb1eb181b99b46eb33760a4e4b"} Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.197274 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" event={"ID":"d2457a57-4283-4e26-982f-62acaa95c1bf","Type":"ContainerStarted","Data":"1505b40f9668c0b416f28f436b18e8ed45b30110e6b86ae7fdd80b72a2fed61e"} Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.199681 4684 generic.go:334] "Generic (PLEG): container finished" podID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerID="358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54" exitCode=0 Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.199770 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a0c15bc-8e5e-47ee-9c23-1673363f1603","Type":"ContainerDied","Data":"358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54"} Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.202002 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"366a8d70-2aa4-439d-a14e-4459b3f45736","Type":"ContainerStarted","Data":"fb80b5bde90a980e0134bf83021c6308ad3307ff8d97c74a8108b0548c28a936"} Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.564981 4684 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.753805 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-ovsdbserver-sb\") pod \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.753935 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-config\") pod \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.753974 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv9jd\" (UniqueName: \"kubernetes.io/projected/11507910-c9ab-4a8d-b0e9-c5e2425c3338-kube-api-access-rv9jd\") pod \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.754011 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-dns-svc\") pod \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\" (UID: \"11507910-c9ab-4a8d-b0e9-c5e2425c3338\") " Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.759410 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11507910-c9ab-4a8d-b0e9-c5e2425c3338-kube-api-access-rv9jd" (OuterVolumeSpecName: "kube-api-access-rv9jd") pod "11507910-c9ab-4a8d-b0e9-c5e2425c3338" (UID: "11507910-c9ab-4a8d-b0e9-c5e2425c3338"). InnerVolumeSpecName "kube-api-access-rv9jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.779682 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11507910-c9ab-4a8d-b0e9-c5e2425c3338" (UID: "11507910-c9ab-4a8d-b0e9-c5e2425c3338"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.784210 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11507910-c9ab-4a8d-b0e9-c5e2425c3338" (UID: "11507910-c9ab-4a8d-b0e9-c5e2425c3338"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.803312 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-config" (OuterVolumeSpecName: "config") pod "11507910-c9ab-4a8d-b0e9-c5e2425c3338" (UID: "11507910-c9ab-4a8d-b0e9-c5e2425c3338"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.856369 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.856773 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.856788 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv9jd\" (UniqueName: \"kubernetes.io/projected/11507910-c9ab-4a8d-b0e9-c5e2425c3338-kube-api-access-rv9jd\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:21 crc kubenswrapper[4684]: I0123 09:29:21.856800 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11507910-c9ab-4a8d-b0e9-c5e2425c3338-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.214425 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a0c15bc-8e5e-47ee-9c23-1673363f1603","Type":"ContainerStarted","Data":"a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697"} Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.214675 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.216409 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" event={"ID":"11507910-c9ab-4a8d-b0e9-c5e2425c3338","Type":"ContainerDied","Data":"0a0d8f2222d2c15e9662598906a285792f3547fa86c9cb18c9f5e7136589fcea"} Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.216455 4684 scope.go:117] "RemoveContainer" containerID="80057a79163d96945e6e9b9cc2c0d2d023fbb384f7164b561f997a10aa557889" Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.216549 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-cr8pj" Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.223584 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x2qgc" event={"ID":"8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86","Type":"ContainerStarted","Data":"213c9fd57693b089b1fa6533fc8d7dc73ea9bc5d2ed87eaddf92dd42ebc404a8"} Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.226729 4684 generic.go:334] "Generic (PLEG): container finished" podID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerID="d80b133b7772d25ddf139545d4d612b6713e72bb1eb181b99b46eb33760a4e4b" exitCode=0 Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.226761 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" event={"ID":"d2457a57-4283-4e26-982f-62acaa95c1bf","Type":"ContainerDied","Data":"d80b133b7772d25ddf139545d4d612b6713e72bb1eb181b99b46eb33760a4e4b"} Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.249354 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.344909725 podStartE2EDuration="1m49.249334364s" podCreationTimestamp="2026-01-23 09:27:33 +0000 UTC" firstStartedPulling="2026-01-23 09:27:35.462020708 +0000 UTC m=+1228.085399249" lastFinishedPulling="2026-01-23 09:28:47.366445347 +0000 UTC m=+1299.989823888" observedRunningTime="2026-01-23 09:29:22.241163793 +0000 UTC m=+1334.864542334" watchObservedRunningTime="2026-01-23 09:29:22.249334364 +0000 UTC m=+1334.872712915" Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.285823 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-cr8pj"] Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.301069 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-cr8pj"] Jan 23 09:29:22 crc kubenswrapper[4684]: I0123 09:29:22.314215 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-x2qgc" podStartSLOduration=2.3141921500000002 podStartE2EDuration="2.31419215s" podCreationTimestamp="2026-01-23 09:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:22.311762101 +0000 UTC m=+1334.935140642" watchObservedRunningTime="2026-01-23 09:29:22.31419215 +0000 UTC m=+1334.937570691" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.043387 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jgsg8" podUID="f6d184f2-6bff-43ba-98a6-6e131c7b45a8" containerName="ovn-controller" probeResult="failure" output=< Jan 23 09:29:23 crc kubenswrapper[4684]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 09:29:23 crc kubenswrapper[4684]: > Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.108415 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.117906 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c5pjd" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.243314 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" 
event={"ID":"d2457a57-4283-4e26-982f-62acaa95c1bf","Type":"ContainerStarted","Data":"69e1711db4062aeb41a776da95d7ff1a45b5e0d638add0143d72deb888d171e3"} Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.243681 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.249542 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"366a8d70-2aa4-439d-a14e-4459b3f45736","Type":"ContainerStarted","Data":"cc49dd0547bde4654f867c5fe69c1d145f0a2ae4785952a7d74c593a23275939"} Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.287599 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podStartSLOduration=4.287576741 podStartE2EDuration="4.287576741s" podCreationTimestamp="2026-01-23 09:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:23.269975953 +0000 UTC m=+1335.893354504" watchObservedRunningTime="2026-01-23 09:29:23.287576741 +0000 UTC m=+1335.910955302" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.436241 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jgsg8-config-k6jsz"] Jan 23 09:29:23 crc kubenswrapper[4684]: E0123 09:29:23.436749 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11507910-c9ab-4a8d-b0e9-c5e2425c3338" containerName="init" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.436773 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="11507910-c9ab-4a8d-b0e9-c5e2425c3338" containerName="init" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.436974 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="11507910-c9ab-4a8d-b0e9-c5e2425c3338" containerName="init" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.437595 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.439880 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.485038 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jgsg8-config-k6jsz"] Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.591200 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11507910-c9ab-4a8d-b0e9-c5e2425c3338" path="/var/lib/kubelet/pods/11507910-c9ab-4a8d-b0e9-c5e2425c3338/volumes" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.592582 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.592621 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-scripts\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.592653 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlbb\" (UniqueName: \"kubernetes.io/projected/34a79192-5869-492e-b53d-dbe5306b32ae-kube-api-access-vtlbb\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.592729 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-additional-scripts\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.592978 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-log-ovn\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.593068 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run-ovn\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.694803 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-log-ovn\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" 
Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.694872 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run-ovn\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.694927 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.694955 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-scripts\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.694980 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtlbb\" (UniqueName: \"kubernetes.io/projected/34a79192-5869-492e-b53d-dbe5306b32ae-kube-api-access-vtlbb\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.695038 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-additional-scripts\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.695384 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.695408 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run-ovn\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.695607 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-log-ovn\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.696201 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-additional-scripts\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc 
kubenswrapper[4684]: I0123 09:29:23.697440 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-scripts\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.722476 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlbb\" (UniqueName: \"kubernetes.io/projected/34a79192-5869-492e-b53d-dbe5306b32ae-kube-api-access-vtlbb\") pod \"ovn-controller-jgsg8-config-k6jsz\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:23 crc kubenswrapper[4684]: I0123 09:29:23.752316 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:24 crc kubenswrapper[4684]: I0123 09:29:24.247669 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jgsg8-config-k6jsz"] Jan 23 09:29:24 crc kubenswrapper[4684]: I0123 09:29:24.261410 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"366a8d70-2aa4-439d-a14e-4459b3f45736","Type":"ContainerStarted","Data":"176e0cda86b437e9b9f7f2ff21946fccb01123bf7854339588fd3db95f2a59b7"} Jan 23 09:29:24 crc kubenswrapper[4684]: I0123 09:29:24.310926 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.305451213 podStartE2EDuration="4.310901086s" podCreationTimestamp="2026-01-23 09:29:20 +0000 UTC" firstStartedPulling="2026-01-23 09:29:20.944950604 +0000 UTC m=+1333.568329145" lastFinishedPulling="2026-01-23 09:29:22.950400477 +0000 UTC m=+1335.573779018" observedRunningTime="2026-01-23 09:29:24.306605765 +0000 UTC m=+1336.929984316" watchObservedRunningTime="2026-01-23 09:29:24.310901086 +0000 UTC m=+1336.934279627" Jan 23 09:29:24 crc kubenswrapper[4684]: I0123 09:29:24.489560 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 09:29:24 crc kubenswrapper[4684]: I0123 09:29:24.608089 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 09:29:25 crc kubenswrapper[4684]: I0123 09:29:25.273153 4684 generic.go:334] "Generic (PLEG): container finished" podID="34a79192-5869-492e-b53d-dbe5306b32ae" containerID="bb936caf3d2f07c80ffe4d73c5c7116e3ad3f7aebf3a16892f0981416095a83c" exitCode=0 Jan 23 09:29:25 crc kubenswrapper[4684]: I0123 09:29:25.273222 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8-config-k6jsz" event={"ID":"34a79192-5869-492e-b53d-dbe5306b32ae","Type":"ContainerDied","Data":"bb936caf3d2f07c80ffe4d73c5c7116e3ad3f7aebf3a16892f0981416095a83c"} Jan 23 09:29:25 crc kubenswrapper[4684]: I0123 09:29:25.275071 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 09:29:25 crc kubenswrapper[4684]: I0123 09:29:25.275109 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8-config-k6jsz" event={"ID":"34a79192-5869-492e-b53d-dbe5306b32ae","Type":"ContainerStarted","Data":"7935e83fe7dbdcb0104aa5009154fc32091be788a88793bb8e48742b364243e2"} Jan 23 09:29:25 crc kubenswrapper[4684]: E0123 09:29:25.584222 4684 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" Jan 23 09:29:25 crc kubenswrapper[4684]: I0123 09:29:25.996212 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 09:29:25 crc kubenswrapper[4684]: I0123 09:29:25.996277 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.018364 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-w8zfc"] Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.019337 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.022316 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.040989 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w8zfc"] Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.136907 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kphb\" (UniqueName: \"kubernetes.io/projected/71408791-d0ae-4bb7-b758-f6d343cf58a7-kube-api-access-8kphb\") pod \"root-account-create-update-w8zfc\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.136980 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71408791-d0ae-4bb7-b758-f6d343cf58a7-operator-scripts\") pod \"root-account-create-update-w8zfc\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.238534 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71408791-d0ae-4bb7-b758-f6d343cf58a7-operator-scripts\") pod \"root-account-create-update-w8zfc\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.238674 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kphb\" (UniqueName: \"kubernetes.io/projected/71408791-d0ae-4bb7-b758-f6d343cf58a7-kube-api-access-8kphb\") pod \"root-account-create-update-w8zfc\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.239438 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71408791-d0ae-4bb7-b758-f6d343cf58a7-operator-scripts\") pod \"root-account-create-update-w8zfc\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.267152 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8kphb\" (UniqueName: \"kubernetes.io/projected/71408791-d0ae-4bb7-b758-f6d343cf58a7-kube-api-access-8kphb\") pod \"root-account-create-update-w8zfc\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.348391 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:26 crc kubenswrapper[4684]: I0123 09:29:26.974006 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w8zfc"] Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.214445 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.296684 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w8zfc" event={"ID":"71408791-d0ae-4bb7-b758-f6d343cf58a7","Type":"ContainerStarted","Data":"d8864e44fc8ed626ba6786a71b5e8a48144316da5f69ba8519fa70a080f7c95e"} Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.300810 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8-config-k6jsz" event={"ID":"34a79192-5869-492e-b53d-dbe5306b32ae","Type":"ContainerDied","Data":"7935e83fe7dbdcb0104aa5009154fc32091be788a88793bb8e48742b364243e2"} Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.300883 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7935e83fe7dbdcb0104aa5009154fc32091be788a88793bb8e48742b364243e2" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.300956 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-k6jsz" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.356580 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run-ovn\") pod \"34a79192-5869-492e-b53d-dbe5306b32ae\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.356804 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "34a79192-5869-492e-b53d-dbe5306b32ae" (UID: "34a79192-5869-492e-b53d-dbe5306b32ae"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357026 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-scripts\") pod \"34a79192-5869-492e-b53d-dbe5306b32ae\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357107 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtlbb\" (UniqueName: \"kubernetes.io/projected/34a79192-5869-492e-b53d-dbe5306b32ae-kube-api-access-vtlbb\") pod \"34a79192-5869-492e-b53d-dbe5306b32ae\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357247 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-log-ovn\") pod \"34a79192-5869-492e-b53d-dbe5306b32ae\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357277 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run\") pod \"34a79192-5869-492e-b53d-dbe5306b32ae\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357313 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-additional-scripts\") pod \"34a79192-5869-492e-b53d-dbe5306b32ae\" (UID: \"34a79192-5869-492e-b53d-dbe5306b32ae\") " Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357384 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "34a79192-5869-492e-b53d-dbe5306b32ae" (UID: "34a79192-5869-492e-b53d-dbe5306b32ae"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357419 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run" (OuterVolumeSpecName: "var-run") pod "34a79192-5869-492e-b53d-dbe5306b32ae" (UID: "34a79192-5869-492e-b53d-dbe5306b32ae"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357787 4684 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357808 4684 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.357820 4684 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/34a79192-5869-492e-b53d-dbe5306b32ae-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.358063 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "34a79192-5869-492e-b53d-dbe5306b32ae" (UID: "34a79192-5869-492e-b53d-dbe5306b32ae"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.358374 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-scripts" (OuterVolumeSpecName: "scripts") pod "34a79192-5869-492e-b53d-dbe5306b32ae" (UID: "34a79192-5869-492e-b53d-dbe5306b32ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.363106 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a79192-5869-492e-b53d-dbe5306b32ae-kube-api-access-vtlbb" (OuterVolumeSpecName: "kube-api-access-vtlbb") pod "34a79192-5869-492e-b53d-dbe5306b32ae" (UID: "34a79192-5869-492e-b53d-dbe5306b32ae"). InnerVolumeSpecName "kube-api-access-vtlbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.459277 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.459330 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtlbb\" (UniqueName: \"kubernetes.io/projected/34a79192-5869-492e-b53d-dbe5306b32ae-kube-api-access-vtlbb\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:27 crc kubenswrapper[4684]: I0123 09:29:27.459370 4684 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/34a79192-5869-492e-b53d-dbe5306b32ae-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.034940 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jgsg8" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.313514 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w8zfc" event={"ID":"71408791-d0ae-4bb7-b758-f6d343cf58a7","Type":"ContainerStarted","Data":"bf5197c3f0eb5ac2458125fa7c0f3ee0a42c7ce5b1fe5883c14a10e51e51e123"} Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.358259 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-w8zfc" podStartSLOduration=3.358239174 podStartE2EDuration="3.358239174s" podCreationTimestamp="2026-01-23 09:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:28.353919412 +0000 UTC m=+1340.977297953" watchObservedRunningTime="2026-01-23 09:29:28.358239174 +0000 UTC m=+1340.981617715" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.380788 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jgsg8-config-k6jsz"] Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.393160 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jgsg8-config-k6jsz"] Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.506395 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jgsg8-config-zxz7b"] Jan 23 09:29:28 crc kubenswrapper[4684]: E0123 09:29:28.506849 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a79192-5869-492e-b53d-dbe5306b32ae" containerName="ovn-config" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.506870 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a79192-5869-492e-b53d-dbe5306b32ae" containerName="ovn-config" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.507068 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a79192-5869-492e-b53d-dbe5306b32ae" containerName="ovn-config" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.507653 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: W0123 09:29:28.513316 4684 reflector.go:561] object-"openstack"/"ovncontroller-extra-scripts": failed to list *v1.ConfigMap: configmaps "ovncontroller-extra-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 23 09:29:28 crc kubenswrapper[4684]: E0123 09:29:28.513413 4684 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-extra-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovncontroller-extra-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.535516 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jgsg8-config-zxz7b"] Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.579342 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.579444 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-additional-scripts\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.579480 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-554x8\" (UniqueName: \"kubernetes.io/projected/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-kube-api-access-554x8\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.579529 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-log-ovn\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.579559 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-scripts\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.579601 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run-ovn\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " 
pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.681810 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-scripts\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.681930 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run-ovn\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682092 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682235 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-additional-scripts\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682308 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-554x8\" (UniqueName: \"kubernetes.io/projected/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-kube-api-access-554x8\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682341 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run-ovn\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682356 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682399 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-log-ovn\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.682576 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-log-ovn\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " 
pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.685412 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-scripts\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:28 crc kubenswrapper[4684]: I0123 09:29:28.731082 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-554x8\" (UniqueName: \"kubernetes.io/projected/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-kube-api-access-554x8\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:29 crc kubenswrapper[4684]: I0123 09:29:29.487164 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 09:29:29 crc kubenswrapper[4684]: I0123 09:29:29.493357 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-additional-scripts\") pod \"ovn-controller-jgsg8-config-zxz7b\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:29 crc kubenswrapper[4684]: I0123 09:29:29.593654 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a79192-5869-492e-b53d-dbe5306b32ae" path="/var/lib/kubelet/pods/34a79192-5869-492e-b53d-dbe5306b32ae/volumes" Jan 23 09:29:29 crc kubenswrapper[4684]: I0123 09:29:29.735117 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.224375 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.264345 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jgsg8-config-zxz7b"] Jan 23 09:29:30 crc kubenswrapper[4684]: W0123 09:29:30.287746 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebd0ad4d_1a78_4f3d_b5eb_ed190420f836.slice/crio-badbb960509d4ebd9c142d36faf6ba9edf24b2c188ef779333079bfb31d35519 WatchSource:0}: Error finding container badbb960509d4ebd9c142d36faf6ba9edf24b2c188ef779333079bfb31d35519: Status 404 returned error can't find the container with id badbb960509d4ebd9c142d36faf6ba9edf24b2c188ef779333079bfb31d35519 Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.290316 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.334030 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8-config-zxz7b" event={"ID":"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836","Type":"ContainerStarted","Data":"badbb960509d4ebd9c142d36faf6ba9edf24b2c188ef779333079bfb31d35519"} Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.335332 4684 generic.go:334] "Generic (PLEG): container finished" podID="71408791-d0ae-4bb7-b758-f6d343cf58a7" containerID="bf5197c3f0eb5ac2458125fa7c0f3ee0a42c7ce5b1fe5883c14a10e51e51e123" exitCode=0 Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.335524 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w8zfc" event={"ID":"71408791-d0ae-4bb7-b758-f6d343cf58a7","Type":"ContainerDied","Data":"bf5197c3f0eb5ac2458125fa7c0f3ee0a42c7ce5b1fe5883c14a10e51e51e123"} Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.352115 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9fjr7"] Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.352334 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerName="dnsmasq-dns" containerID="cri-o://5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c" gracePeriod=10 Jan 23 09:29:30 crc kubenswrapper[4684]: I0123 09:29:30.393821 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="01c5f17c-8303-4cae-b577-1da34c402098" containerName="galera" probeResult="failure" output=< Jan 23 09:29:30 crc kubenswrapper[4684]: wsrep_local_state_comment (Joined) differs from Synced Jan 23 09:29:30 crc kubenswrapper[4684]: > Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.338840 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.345379 4684 generic.go:334] "Generic (PLEG): container finished" podID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerID="5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c" exitCode=0 Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.345463 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.345461 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" event={"ID":"169ee556-d1ee-4f51-9958-46bd24d4467f","Type":"ContainerDied","Data":"5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c"} Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.345649 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9fjr7" event={"ID":"169ee556-d1ee-4f51-9958-46bd24d4467f","Type":"ContainerDied","Data":"0cda0dc6524d004735f24df66e7a23e60dd8772f536d7fff61af71f17d84535e"} Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.345730 4684 scope.go:117] "RemoveContainer" containerID="5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.350051 4684 generic.go:334] "Generic (PLEG): container finished" podID="ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" containerID="c54de7caf1a0c9b9e1e2a2d38c6dc0095338361df2e9d87f35b9cc94760fa909" exitCode=0 Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.350328 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8-config-zxz7b" event={"ID":"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836","Type":"ContainerDied","Data":"c54de7caf1a0c9b9e1e2a2d38c6dc0095338361df2e9d87f35b9cc94760fa909"} Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.411947 4684 scope.go:117] "RemoveContainer" containerID="983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.435286 4684 scope.go:117] "RemoveContainer" containerID="5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c" Jan 23 09:29:31 crc 
kubenswrapper[4684]: E0123 09:29:31.435880 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c\": container with ID starting with 5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c not found: ID does not exist" containerID="5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.435927 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c"} err="failed to get container status \"5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c\": rpc error: code = NotFound desc = could not find container \"5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c\": container with ID starting with 5f04488987c235686a18dc4872d68dc7464a68ffb3318c3d31aa196019932b4c not found: ID does not exist" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.435954 4684 scope.go:117] "RemoveContainer" containerID="983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7" Jan 23 09:29:31 crc kubenswrapper[4684]: E0123 09:29:31.436461 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7\": container with ID starting with 983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7 not found: ID does not exist" containerID="983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.436509 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7"} err="failed to get container status \"983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7\": rpc error: code = NotFound desc = could not find container \"983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7\": container with ID starting with 983c7bf7e94d31ba7060e25299096d29b58c568bae4f7ee7f76f54e6cfe586a7 not found: ID does not exist" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.441601 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-config\") pod \"169ee556-d1ee-4f51-9958-46bd24d4467f\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.441871 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9z4m\" (UniqueName: \"kubernetes.io/projected/169ee556-d1ee-4f51-9958-46bd24d4467f-kube-api-access-m9z4m\") pod \"169ee556-d1ee-4f51-9958-46bd24d4467f\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.441926 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-dns-svc\") pod \"169ee556-d1ee-4f51-9958-46bd24d4467f\" (UID: \"169ee556-d1ee-4f51-9958-46bd24d4467f\") " Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.448992 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169ee556-d1ee-4f51-9958-46bd24d4467f-kube-api-access-m9z4m" 
(OuterVolumeSpecName: "kube-api-access-m9z4m") pod "169ee556-d1ee-4f51-9958-46bd24d4467f" (UID: "169ee556-d1ee-4f51-9958-46bd24d4467f"). InnerVolumeSpecName "kube-api-access-m9z4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.514146 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-config" (OuterVolumeSpecName: "config") pod "169ee556-d1ee-4f51-9958-46bd24d4467f" (UID: "169ee556-d1ee-4f51-9958-46bd24d4467f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.523742 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "169ee556-d1ee-4f51-9958-46bd24d4467f" (UID: "169ee556-d1ee-4f51-9958-46bd24d4467f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.545092 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.545130 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9z4m\" (UniqueName: \"kubernetes.io/projected/169ee556-d1ee-4f51-9958-46bd24d4467f-kube-api-access-m9z4m\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.545146 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169ee556-d1ee-4f51-9958-46bd24d4467f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.669317 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9fjr7"] Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.676347 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9fjr7"] Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.692135 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.748415 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kphb\" (UniqueName: \"kubernetes.io/projected/71408791-d0ae-4bb7-b758-f6d343cf58a7-kube-api-access-8kphb\") pod \"71408791-d0ae-4bb7-b758-f6d343cf58a7\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.748467 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71408791-d0ae-4bb7-b758-f6d343cf58a7-operator-scripts\") pod \"71408791-d0ae-4bb7-b758-f6d343cf58a7\" (UID: \"71408791-d0ae-4bb7-b758-f6d343cf58a7\") " Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.749417 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71408791-d0ae-4bb7-b758-f6d343cf58a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71408791-d0ae-4bb7-b758-f6d343cf58a7" (UID: "71408791-d0ae-4bb7-b758-f6d343cf58a7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.753635 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71408791-d0ae-4bb7-b758-f6d343cf58a7-kube-api-access-8kphb" (OuterVolumeSpecName: "kube-api-access-8kphb") pod "71408791-d0ae-4bb7-b758-f6d343cf58a7" (UID: "71408791-d0ae-4bb7-b758-f6d343cf58a7"). InnerVolumeSpecName "kube-api-access-8kphb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.850327 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kphb\" (UniqueName: \"kubernetes.io/projected/71408791-d0ae-4bb7-b758-f6d343cf58a7-kube-api-access-8kphb\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:31 crc kubenswrapper[4684]: I0123 09:29:31.850369 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71408791-d0ae-4bb7-b758-f6d343cf58a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.359506 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w8zfc" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.359521 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w8zfc" event={"ID":"71408791-d0ae-4bb7-b758-f6d343cf58a7","Type":"ContainerDied","Data":"d8864e44fc8ed626ba6786a71b5e8a48144316da5f69ba8519fa70a080f7c95e"} Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.360004 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8864e44fc8ed626ba6786a71b5e8a48144316da5f69ba8519fa70a080f7c95e" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.756909 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.869046 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-554x8\" (UniqueName: \"kubernetes.io/projected/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-kube-api-access-554x8\") pod \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.869360 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-log-ovn\") pod \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.869598 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-scripts\") pod \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870104 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run-ovn\") pod \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870257 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-additional-scripts\") pod \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870395 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run\") pod \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\" (UID: \"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836\") " Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870293 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" (UID: "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870501 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" (UID: "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870943 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" (UID: "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.870962 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run" (OuterVolumeSpecName: "var-run") pod "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" (UID: "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.871247 4684 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.871346 4684 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.871427 4684 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.871514 4684 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.871421 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-scripts" (OuterVolumeSpecName: "scripts") pod "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" (UID: "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.878064 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-kube-api-access-554x8" (OuterVolumeSpecName: "kube-api-access-554x8") pod "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" (UID: "ebd0ad4d-1a78-4f3d-b5eb-ed190420f836"). InnerVolumeSpecName "kube-api-access-554x8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.973463 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:32 crc kubenswrapper[4684]: I0123 09:29:32.973535 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-554x8\" (UniqueName: \"kubernetes.io/projected/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836-kube-api-access-554x8\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:33 crc kubenswrapper[4684]: I0123 09:29:33.372117 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jgsg8-config-zxz7b" event={"ID":"ebd0ad4d-1a78-4f3d-b5eb-ed190420f836","Type":"ContainerDied","Data":"badbb960509d4ebd9c142d36faf6ba9edf24b2c188ef779333079bfb31d35519"} Jan 23 09:29:33 crc kubenswrapper[4684]: I0123 09:29:33.372184 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="badbb960509d4ebd9c142d36faf6ba9edf24b2c188ef779333079bfb31d35519" Jan 23 09:29:33 crc kubenswrapper[4684]: I0123 09:29:33.372194 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jgsg8-config-zxz7b" Jan 23 09:29:33 crc kubenswrapper[4684]: I0123 09:29:33.592299 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" path="/var/lib/kubelet/pods/169ee556-d1ee-4f51-9958-46bd24d4467f/volumes" Jan 23 09:29:33 crc kubenswrapper[4684]: I0123 09:29:33.834116 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jgsg8-config-zxz7b"] Jan 23 09:29:33 crc kubenswrapper[4684]: I0123 09:29:33.839294 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jgsg8-config-zxz7b"] Jan 23 09:29:34 crc kubenswrapper[4684]: I0123 09:29:34.708125 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:29:35 crc kubenswrapper[4684]: I0123 09:29:35.421876 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 09:29:35 crc kubenswrapper[4684]: I0123 09:29:35.594337 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" path="/var/lib/kubelet/pods/ebd0ad4d-1a78-4f3d-b5eb-ed190420f836/volumes" Jan 23 09:29:36 crc kubenswrapper[4684]: I0123 09:29:36.120320 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.135166 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qbpk9"] Jan 23 09:29:37 crc kubenswrapper[4684]: E0123 09:29:37.136207 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" containerName="ovn-config" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136225 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" containerName="ovn-config" Jan 23 09:29:37 crc kubenswrapper[4684]: E0123 09:29:37.136246 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71408791-d0ae-4bb7-b758-f6d343cf58a7" containerName="mariadb-account-create-update" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136254 4684 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="71408791-d0ae-4bb7-b758-f6d343cf58a7" containerName="mariadb-account-create-update" Jan 23 09:29:37 crc kubenswrapper[4684]: E0123 09:29:37.136275 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerName="dnsmasq-dns" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136283 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerName="dnsmasq-dns" Jan 23 09:29:37 crc kubenswrapper[4684]: E0123 09:29:37.136297 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerName="init" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136303 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerName="init" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136764 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="71408791-d0ae-4bb7-b758-f6d343cf58a7" containerName="mariadb-account-create-update" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136790 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="169ee556-d1ee-4f51-9958-46bd24d4467f" containerName="dnsmasq-dns" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.136801 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd0ad4d-1a78-4f3d-b5eb-ed190420f836" containerName="ovn-config" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.137651 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qbpk9" Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.145692 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qbpk9"] Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.238522 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2c77-account-create-update-k55xq"] Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.239520 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.245322 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.251736 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93356bbd-8831-4fad-a7a7-4494b4244c26-operator-scripts\") pod \"keystone-db-create-qbpk9\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") " pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.251818 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t8k\" (UniqueName: \"kubernetes.io/projected/93356bbd-8831-4fad-a7a7-4494b4244c26-kube-api-access-75t8k\") pod \"keystone-db-create-qbpk9\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") " pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.256538 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2c77-account-create-update-k55xq"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.352883 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rp9\" (UniqueName: \"kubernetes.io/projected/4887e48e-971e-4f3a-8a5f-a050961c9c7c-kube-api-access-f8rp9\") pod \"keystone-2c77-account-create-update-k55xq\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") " pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.353186 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4887e48e-971e-4f3a-8a5f-a050961c9c7c-operator-scripts\") pod \"keystone-2c77-account-create-update-k55xq\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") " pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.353444 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93356bbd-8831-4fad-a7a7-4494b4244c26-operator-scripts\") pod \"keystone-db-create-qbpk9\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") " pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.353579 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75t8k\" (UniqueName: \"kubernetes.io/projected/93356bbd-8831-4fad-a7a7-4494b4244c26-kube-api-access-75t8k\") pod \"keystone-db-create-qbpk9\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") " pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.354294 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93356bbd-8831-4fad-a7a7-4494b4244c26-operator-scripts\") pod \"keystone-db-create-qbpk9\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") " pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.382487 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75t8k\" (UniqueName: \"kubernetes.io/projected/93356bbd-8831-4fad-a7a7-4494b4244c26-kube-api-access-75t8k\") pod \"keystone-db-create-qbpk9\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") " pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.429721 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-q6drh"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.430711 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.438968 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q6drh"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.455007 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4887e48e-971e-4f3a-8a5f-a050961c9c7c-operator-scripts\") pod \"keystone-2c77-account-create-update-k55xq\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") " pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.455208 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rp9\" (UniqueName: \"kubernetes.io/projected/4887e48e-971e-4f3a-8a5f-a050961c9c7c-kube-api-access-f8rp9\") pod \"keystone-2c77-account-create-update-k55xq\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") " pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.456370 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4887e48e-971e-4f3a-8a5f-a050961c9c7c-operator-scripts\") pod \"keystone-2c77-account-create-update-k55xq\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") " pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.479135 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.479328 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rp9\" (UniqueName: \"kubernetes.io/projected/4887e48e-971e-4f3a-8a5f-a050961c9c7c-kube-api-access-f8rp9\") pod \"keystone-2c77-account-create-update-k55xq\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") " pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.543528 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1ed6-account-create-update-prsmk"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.544643 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.548817 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.557848 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ed6-account-create-update-prsmk"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.558680 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647mw\" (UniqueName: \"kubernetes.io/projected/9fbe33db-2ad3-4693-b957-716547fb796f-kube-api-access-647mw\") pod \"placement-db-create-q6drh\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") " pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.558834 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbe33db-2ad3-4693-b957-716547fb796f-operator-scripts\") pod \"placement-db-create-q6drh\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") " pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.571626 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.660239 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qjj5\" (UniqueName: \"kubernetes.io/projected/41137896-cb01-4aa7-a4c0-786f7db16906-kube-api-access-9qjj5\") pod \"placement-1ed6-account-create-update-prsmk\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") " pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.660313 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41137896-cb01-4aa7-a4c0-786f7db16906-operator-scripts\") pod \"placement-1ed6-account-create-update-prsmk\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") " pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.660532 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-647mw\" (UniqueName: \"kubernetes.io/projected/9fbe33db-2ad3-4693-b957-716547fb796f-kube-api-access-647mw\") pod \"placement-db-create-q6drh\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") " pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.660665 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbe33db-2ad3-4693-b957-716547fb796f-operator-scripts\") pod \"placement-db-create-q6drh\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") " pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.661379 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbe33db-2ad3-4693-b957-716547fb796f-operator-scripts\") pod \"placement-db-create-q6drh\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") " pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.688093 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-647mw\" (UniqueName: \"kubernetes.io/projected/9fbe33db-2ad3-4693-b957-716547fb796f-kube-api-access-647mw\") pod \"placement-db-create-q6drh\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") " pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.748910 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.762764 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qjj5\" (UniqueName: \"kubernetes.io/projected/41137896-cb01-4aa7-a4c0-786f7db16906-kube-api-access-9qjj5\") pod \"placement-1ed6-account-create-update-prsmk\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") " pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.762847 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41137896-cb01-4aa7-a4c0-786f7db16906-operator-scripts\") pod \"placement-1ed6-account-create-update-prsmk\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") " pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.764094 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41137896-cb01-4aa7-a4c0-786f7db16906-operator-scripts\") pod \"placement-1ed6-account-create-update-prsmk\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") " pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.784474 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qjj5\" (UniqueName: \"kubernetes.io/projected/41137896-cb01-4aa7-a4c0-786f7db16906-kube-api-access-9qjj5\") pod \"placement-1ed6-account-create-update-prsmk\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") " pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.834965 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hgjbd"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.836124 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.854224 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hgjbd"]
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.871777 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.966947 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd72eca-ef1e-4445-9d19-65ff92842e15-operator-scripts\") pod \"glance-db-create-hgjbd\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") " pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.967131 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drs5n\" (UniqueName: \"kubernetes.io/projected/2bd72eca-ef1e-4445-9d19-65ff92842e15-kube-api-access-drs5n\") pod \"glance-db-create-hgjbd\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") " pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:37 crc kubenswrapper[4684]: W0123 09:29:37.981629 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93356bbd_8831_4fad_a7a7_4494b4244c26.slice/crio-8b3fdfb8ad6008910fa6a7b00ad9a66ea2985815be06f972a83a48944fc1e46b WatchSource:0}: Error finding container 8b3fdfb8ad6008910fa6a7b00ad9a66ea2985815be06f972a83a48944fc1e46b: Status 404 returned error can't find the container with id 8b3fdfb8ad6008910fa6a7b00ad9a66ea2985815be06f972a83a48944fc1e46b
Jan 23 09:29:37 crc kubenswrapper[4684]: I0123 09:29:37.983453 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qbpk9"]
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.073189 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd72eca-ef1e-4445-9d19-65ff92842e15-operator-scripts\") pod \"glance-db-create-hgjbd\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") " pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.074341 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drs5n\" (UniqueName: \"kubernetes.io/projected/2bd72eca-ef1e-4445-9d19-65ff92842e15-kube-api-access-drs5n\") pod \"glance-db-create-hgjbd\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") " pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.075318 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd72eca-ef1e-4445-9d19-65ff92842e15-operator-scripts\") pod \"glance-db-create-hgjbd\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") " pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.094400 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drs5n\" (UniqueName: \"kubernetes.io/projected/2bd72eca-ef1e-4445-9d19-65ff92842e15-kube-api-access-drs5n\") pod \"glance-db-create-hgjbd\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") " pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.159231 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.199207 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2c77-account-create-update-k55xq"]
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.367692 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q6drh"]
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.442261 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbpk9" event={"ID":"93356bbd-8831-4fad-a7a7-4494b4244c26","Type":"ContainerStarted","Data":"8b3fdfb8ad6008910fa6a7b00ad9a66ea2985815be06f972a83a48944fc1e46b"}
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.449066 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q6drh" event={"ID":"9fbe33db-2ad3-4693-b957-716547fb796f","Type":"ContainerStarted","Data":"2b9460ac5cc0c11444cc39390e87b6bfc6555e2989a88f818e52598dba935da9"}
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.451253 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a3c7-account-create-update-mqslp"]
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.456164 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.458440 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.458934 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c77-account-create-update-k55xq" event={"ID":"4887e48e-971e-4f3a-8a5f-a050961c9c7c","Type":"ContainerStarted","Data":"f64efd84b4cb025058e6a75eedf2a49f0e7d527b68ce07dc7bf30d18ebfc4e98"}
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.461926 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a3c7-account-create-update-mqslp"]
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.602948 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-operator-scripts\") pod \"glance-a3c7-account-create-update-mqslp\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") " pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.605410 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75nq\" (UniqueName: \"kubernetes.io/projected/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-kube-api-access-z75nq\") pod \"glance-a3c7-account-create-update-mqslp\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") " pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.609170 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ed6-account-create-update-prsmk"]
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.713937 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-operator-scripts\") pod \"glance-a3c7-account-create-update-mqslp\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") " pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.714023 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75nq\" (UniqueName: \"kubernetes.io/projected/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-kube-api-access-z75nq\") pod \"glance-a3c7-account-create-update-mqslp\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") " pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.716265 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-operator-scripts\") pod \"glance-a3c7-account-create-update-mqslp\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") " pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.740749 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75nq\" (UniqueName: \"kubernetes.io/projected/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-kube-api-access-z75nq\") pod \"glance-a3c7-account-create-update-mqslp\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") " pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.888033 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:38 crc kubenswrapper[4684]: I0123 09:29:38.940859 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hgjbd"]
Jan 23 09:29:38 crc kubenswrapper[4684]: W0123 09:29:38.949400 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bd72eca_ef1e_4445_9d19_65ff92842e15.slice/crio-3d01fb7038634309d1b71242b61e5cd78c4440e197e90e3ce70af0e868dd34b9 WatchSource:0}: Error finding container 3d01fb7038634309d1b71242b61e5cd78c4440e197e90e3ce70af0e868dd34b9: Status 404 returned error can't find the container with id 3d01fb7038634309d1b71242b61e5cd78c4440e197e90e3ce70af0e868dd34b9
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.391568 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a3c7-account-create-update-mqslp"]
Jan 23 09:29:39 crc kubenswrapper[4684]: W0123 09:29:39.401322 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b25b466_e0e3_4ec1_9ce6_c3f4a19a2ae9.slice/crio-c8497977b6a960a2ec6d2384fd3ca5beb0a8236ef3db340f3ae72d355e7bcb83 WatchSource:0}: Error finding container c8497977b6a960a2ec6d2384fd3ca5beb0a8236ef3db340f3ae72d355e7bcb83: Status 404 returned error can't find the container with id c8497977b6a960a2ec6d2384fd3ca5beb0a8236ef3db340f3ae72d355e7bcb83
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.487555 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbpk9" event={"ID":"93356bbd-8831-4fad-a7a7-4494b4244c26","Type":"ContainerStarted","Data":"9dd7dff46f7efcc0738ef0a948eb3c1d2001b98a8bc1cbced6f6f45a4c4f5832"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.492286 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a3c7-account-create-update-mqslp" event={"ID":"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9","Type":"ContainerStarted","Data":"c8497977b6a960a2ec6d2384fd3ca5beb0a8236ef3db340f3ae72d355e7bcb83"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.494690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hgjbd" event={"ID":"2bd72eca-ef1e-4445-9d19-65ff92842e15","Type":"ContainerStarted","Data":"6d2d1b0aa404c80f7cdaa2866f4b096fea8b17458044436ce8287492fc01664c"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.494788 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hgjbd" event={"ID":"2bd72eca-ef1e-4445-9d19-65ff92842e15","Type":"ContainerStarted","Data":"3d01fb7038634309d1b71242b61e5cd78c4440e197e90e3ce70af0e868dd34b9"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.498362 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q6drh" event={"ID":"9fbe33db-2ad3-4693-b957-716547fb796f","Type":"ContainerStarted","Data":"e51f1a61abede2389701f093ae03178e6c0c47a717998eda985286e9850df226"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.509346 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-qbpk9" podStartSLOduration=2.5093285229999998 podStartE2EDuration="2.509328523s" podCreationTimestamp="2026-01-23 09:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:39.508982593 +0000 UTC m=+1352.132361134" watchObservedRunningTime="2026-01-23 09:29:39.509328523 +0000 UTC m=+1352.132707064"
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.512687 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ed6-account-create-update-prsmk" event={"ID":"41137896-cb01-4aa7-a4c0-786f7db16906","Type":"ContainerStarted","Data":"77af8ffcaeca9435e6e4535486b24ad2c2cc8b264bbe31057a6f33747e15ecaa"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.512765 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ed6-account-create-update-prsmk" event={"ID":"41137896-cb01-4aa7-a4c0-786f7db16906","Type":"ContainerStarted","Data":"d5f3a0843c367195ee46802d59722e6805b7b965e760c315c6068cfbe7bf10be"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.520645 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c77-account-create-update-k55xq" event={"ID":"4887e48e-971e-4f3a-8a5f-a050961c9c7c","Type":"ContainerStarted","Data":"414c8b50110f01840dfdf1a3f2857d4942a51b9c24ff5691d462ca8a72909d34"}
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.534029 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-hgjbd" podStartSLOduration=2.534011381 podStartE2EDuration="2.534011381s" podCreationTimestamp="2026-01-23 09:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:39.531168121 +0000 UTC m=+1352.154546672" watchObservedRunningTime="2026-01-23 09:29:39.534011381 +0000 UTC m=+1352.157389932"
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.557046 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-q6drh" podStartSLOduration=2.557020732 podStartE2EDuration="2.557020732s" podCreationTimestamp="2026-01-23 09:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:39.550809277 +0000 UTC m=+1352.174187838" watchObservedRunningTime="2026-01-23 09:29:39.557020732 +0000 UTC m=+1352.180399273"
Jan 23 09:29:39 crc kubenswrapper[4684]: I0123 09:29:39.579570 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2c77-account-create-update-k55xq" podStartSLOduration=2.57955171 podStartE2EDuration="2.57955171s" podCreationTimestamp="2026-01-23 09:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:39.571262726 +0000 UTC m=+1352.194641277" watchObservedRunningTime="2026-01-23 09:29:39.57955171 +0000 UTC m=+1352.202930251"
Jan 23 09:29:40 crc kubenswrapper[4684]: I0123 09:29:40.528428 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a3c7-account-create-update-mqslp" event={"ID":"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9","Type":"ContainerStarted","Data":"c19eb481efea50f03afa185cead2d9cd36a7b905c2c818689fb4b12dad3886cc"}
Jan 23 09:29:40 crc kubenswrapper[4684]: I0123 09:29:40.549251 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a3c7-account-create-update-mqslp" podStartSLOduration=2.549229057 podStartE2EDuration="2.549229057s" podCreationTimestamp="2026-01-23 09:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:40.546179211 +0000 UTC m=+1353.169557742" watchObservedRunningTime="2026-01-23 09:29:40.549229057 +0000 UTC m=+1353.172607598"
Jan 23 09:29:40 crc kubenswrapper[4684]: I0123 09:29:40.568336 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1ed6-account-create-update-prsmk" podStartSLOduration=3.568314837 podStartE2EDuration="3.568314837s" podCreationTimestamp="2026-01-23 09:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:40.560884597 +0000 UTC m=+1353.184263158" watchObservedRunningTime="2026-01-23 09:29:40.568314837 +0000 UTC m=+1353.191693378"
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.552797 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48e55475-0575-41e9-9949-d5bdb86ee565","Type":"ContainerStarted","Data":"b95156a791bf23ae774336a0514e84c390ba31fc6386be32beea115aec8187db"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.553958 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.554855 4684 generic.go:334] "Generic (PLEG): container finished" podID="41137896-cb01-4aa7-a4c0-786f7db16906" containerID="77af8ffcaeca9435e6e4535486b24ad2c2cc8b264bbe31057a6f33747e15ecaa" exitCode=0
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.554904 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ed6-account-create-update-prsmk" event={"ID":"41137896-cb01-4aa7-a4c0-786f7db16906","Type":"ContainerDied","Data":"77af8ffcaeca9435e6e4535486b24ad2c2cc8b264bbe31057a6f33747e15ecaa"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.556561 4684 generic.go:334] "Generic (PLEG): container finished" podID="4887e48e-971e-4f3a-8a5f-a050961c9c7c" containerID="414c8b50110f01840dfdf1a3f2857d4942a51b9c24ff5691d462ca8a72909d34" exitCode=0
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.556633 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c77-account-create-update-k55xq" event={"ID":"4887e48e-971e-4f3a-8a5f-a050961c9c7c","Type":"ContainerDied","Data":"414c8b50110f01840dfdf1a3f2857d4942a51b9c24ff5691d462ca8a72909d34"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.558538 4684 generic.go:334] "Generic (PLEG): container finished" podID="93356bbd-8831-4fad-a7a7-4494b4244c26" containerID="9dd7dff46f7efcc0738ef0a948eb3c1d2001b98a8bc1cbced6f6f45a4c4f5832" exitCode=0
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.558588 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbpk9" event={"ID":"93356bbd-8831-4fad-a7a7-4494b4244c26","Type":"ContainerDied","Data":"9dd7dff46f7efcc0738ef0a948eb3c1d2001b98a8bc1cbced6f6f45a4c4f5832"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.560580 4684 generic.go:334] "Generic (PLEG): container finished" podID="0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" containerID="c19eb481efea50f03afa185cead2d9cd36a7b905c2c818689fb4b12dad3886cc" exitCode=0
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.560620 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a3c7-account-create-update-mqslp" event={"ID":"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9","Type":"ContainerDied","Data":"c19eb481efea50f03afa185cead2d9cd36a7b905c2c818689fb4b12dad3886cc"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.562364 4684 generic.go:334] "Generic (PLEG): container finished" podID="2bd72eca-ef1e-4445-9d19-65ff92842e15" containerID="6d2d1b0aa404c80f7cdaa2866f4b096fea8b17458044436ce8287492fc01664c" exitCode=0
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.562568 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hgjbd" event={"ID":"2bd72eca-ef1e-4445-9d19-65ff92842e15","Type":"ContainerDied","Data":"6d2d1b0aa404c80f7cdaa2866f4b096fea8b17458044436ce8287492fc01664c"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.565195 4684 generic.go:334] "Generic (PLEG): container finished" podID="9fbe33db-2ad3-4693-b957-716547fb796f" containerID="e51f1a61abede2389701f093ae03178e6c0c47a717998eda985286e9850df226" exitCode=0
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.565256 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q6drh" event={"ID":"9fbe33db-2ad3-4693-b957-716547fb796f","Type":"ContainerDied","Data":"e51f1a61abede2389701f093ae03178e6c0c47a717998eda985286e9850df226"}
Jan 23 09:29:43 crc kubenswrapper[4684]: I0123 09:29:43.598287 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.979718807 podStartE2EDuration="2m4.598270518s" podCreationTimestamp="2026-01-23 09:27:39 +0000 UTC" firstStartedPulling="2026-01-23 09:27:40.115571019 +0000 UTC m=+1232.738949560" lastFinishedPulling="2026-01-23 09:29:42.73412273 +0000 UTC m=+1355.357501271" observedRunningTime="2026-01-23 09:29:43.576354428 +0000 UTC m=+1356.199732989" watchObservedRunningTime="2026-01-23 09:29:43.598270518 +0000 UTC m=+1356.221649049"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.524471 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-w8zfc"]
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.531213 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-w8zfc"]
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.617345 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2gbdd"]
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.619408 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.621629 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.634515 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2gbdd"]
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.732683 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628sw\" (UniqueName: \"kubernetes.io/projected/471d49c9-9531-4b3c-bbe5-1fb98852d71d-kube-api-access-628sw\") pod \"root-account-create-update-2gbdd\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.732781 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/471d49c9-9531-4b3c-bbe5-1fb98852d71d-operator-scripts\") pod \"root-account-create-update-2gbdd\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.836669 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628sw\" (UniqueName: \"kubernetes.io/projected/471d49c9-9531-4b3c-bbe5-1fb98852d71d-kube-api-access-628sw\") pod \"root-account-create-update-2gbdd\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.836749 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/471d49c9-9531-4b3c-bbe5-1fb98852d71d-operator-scripts\") pod \"root-account-create-update-2gbdd\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.837838 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/471d49c9-9531-4b3c-bbe5-1fb98852d71d-operator-scripts\") pod \"root-account-create-update-2gbdd\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.865607 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-628sw\" (UniqueName: \"kubernetes.io/projected/471d49c9-9531-4b3c-bbe5-1fb98852d71d-kube-api-access-628sw\") pod \"root-account-create-update-2gbdd\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:44 crc kubenswrapper[4684]: I0123 09:29:44.943216 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2gbdd"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.038161 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.142481 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4887e48e-971e-4f3a-8a5f-a050961c9c7c-operator-scripts\") pod \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.142572 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8rp9\" (UniqueName: \"kubernetes.io/projected/4887e48e-971e-4f3a-8a5f-a050961c9c7c-kube-api-access-f8rp9\") pod \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\" (UID: \"4887e48e-971e-4f3a-8a5f-a050961c9c7c\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.143380 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4887e48e-971e-4f3a-8a5f-a050961c9c7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4887e48e-971e-4f3a-8a5f-a050961c9c7c" (UID: "4887e48e-971e-4f3a-8a5f-a050961c9c7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.150843 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4887e48e-971e-4f3a-8a5f-a050961c9c7c-kube-api-access-f8rp9" (OuterVolumeSpecName: "kube-api-access-f8rp9") pod "4887e48e-971e-4f3a-8a5f-a050961c9c7c" (UID: "4887e48e-971e-4f3a-8a5f-a050961c9c7c"). InnerVolumeSpecName "kube-api-access-f8rp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.198460 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.219092 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.230913 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.231618 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.244624 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.245131 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8rp9\" (UniqueName: \"kubernetes.io/projected/4887e48e-971e-4f3a-8a5f-a050961c9c7c-kube-api-access-f8rp9\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.245164 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4887e48e-971e-4f3a-8a5f-a050961c9c7c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346320 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93356bbd-8831-4fad-a7a7-4494b4244c26-operator-scripts\") pod \"93356bbd-8831-4fad-a7a7-4494b4244c26\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346459 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drs5n\" (UniqueName: \"kubernetes.io/projected/2bd72eca-ef1e-4445-9d19-65ff92842e15-kube-api-access-drs5n\") pod \"2bd72eca-ef1e-4445-9d19-65ff92842e15\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346488 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-operator-scripts\") pod \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346540 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbe33db-2ad3-4693-b957-716547fb796f-operator-scripts\") pod \"9fbe33db-2ad3-4693-b957-716547fb796f\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346858 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z75nq\" (UniqueName: \"kubernetes.io/projected/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-kube-api-access-z75nq\") pod \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\" (UID: \"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346876 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41137896-cb01-4aa7-a4c0-786f7db16906-operator-scripts\") pod \"41137896-cb01-4aa7-a4c0-786f7db16906\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346905 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qjj5\" (UniqueName: \"kubernetes.io/projected/41137896-cb01-4aa7-a4c0-786f7db16906-kube-api-access-9qjj5\") pod \"41137896-cb01-4aa7-a4c0-786f7db16906\" (UID: \"41137896-cb01-4aa7-a4c0-786f7db16906\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346931 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd72eca-ef1e-4445-9d19-65ff92842e15-operator-scripts\") pod \"2bd72eca-ef1e-4445-9d19-65ff92842e15\" (UID: \"2bd72eca-ef1e-4445-9d19-65ff92842e15\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.346978 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-647mw\" (UniqueName: \"kubernetes.io/projected/9fbe33db-2ad3-4693-b957-716547fb796f-kube-api-access-647mw\") pod \"9fbe33db-2ad3-4693-b957-716547fb796f\" (UID: \"9fbe33db-2ad3-4693-b957-716547fb796f\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.347006 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75t8k\" (UniqueName: \"kubernetes.io/projected/93356bbd-8831-4fad-a7a7-4494b4244c26-kube-api-access-75t8k\") pod \"93356bbd-8831-4fad-a7a7-4494b4244c26\" (UID: \"93356bbd-8831-4fad-a7a7-4494b4244c26\") "
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.347277 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" (UID: "0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.347378 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93356bbd-8831-4fad-a7a7-4494b4244c26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93356bbd-8831-4fad-a7a7-4494b4244c26" (UID: "93356bbd-8831-4fad-a7a7-4494b4244c26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.347813 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd72eca-ef1e-4445-9d19-65ff92842e15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bd72eca-ef1e-4445-9d19-65ff92842e15" (UID: "2bd72eca-ef1e-4445-9d19-65ff92842e15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.347954 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41137896-cb01-4aa7-a4c0-786f7db16906-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41137896-cb01-4aa7-a4c0-786f7db16906" (UID: "41137896-cb01-4aa7-a4c0-786f7db16906"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.348029 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbe33db-2ad3-4693-b957-716547fb796f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fbe33db-2ad3-4693-b957-716547fb796f" (UID: "9fbe33db-2ad3-4693-b957-716547fb796f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.351782 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41137896-cb01-4aa7-a4c0-786f7db16906-kube-api-access-9qjj5" (OuterVolumeSpecName: "kube-api-access-9qjj5") pod "41137896-cb01-4aa7-a4c0-786f7db16906" (UID: "41137896-cb01-4aa7-a4c0-786f7db16906"). InnerVolumeSpecName "kube-api-access-9qjj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.354095 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-kube-api-access-z75nq" (OuterVolumeSpecName: "kube-api-access-z75nq") pod "0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" (UID: "0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9"). InnerVolumeSpecName "kube-api-access-z75nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.354217 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd72eca-ef1e-4445-9d19-65ff92842e15-kube-api-access-drs5n" (OuterVolumeSpecName: "kube-api-access-drs5n") pod "2bd72eca-ef1e-4445-9d19-65ff92842e15" (UID: "2bd72eca-ef1e-4445-9d19-65ff92842e15"). InnerVolumeSpecName "kube-api-access-drs5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.354274 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbe33db-2ad3-4693-b957-716547fb796f-kube-api-access-647mw" (OuterVolumeSpecName: "kube-api-access-647mw") pod "9fbe33db-2ad3-4693-b957-716547fb796f" (UID: "9fbe33db-2ad3-4693-b957-716547fb796f"). InnerVolumeSpecName "kube-api-access-647mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.367444 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93356bbd-8831-4fad-a7a7-4494b4244c26-kube-api-access-75t8k" (OuterVolumeSpecName: "kube-api-access-75t8k") pod "93356bbd-8831-4fad-a7a7-4494b4244c26" (UID: "93356bbd-8831-4fad-a7a7-4494b4244c26"). InnerVolumeSpecName "kube-api-access-75t8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449360 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93356bbd-8831-4fad-a7a7-4494b4244c26-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449593 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drs5n\" (UniqueName: \"kubernetes.io/projected/2bd72eca-ef1e-4445-9d19-65ff92842e15-kube-api-access-drs5n\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449608 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449619 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbe33db-2ad3-4693-b957-716547fb796f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449630 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z75nq\" (UniqueName: \"kubernetes.io/projected/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9-kube-api-access-z75nq\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449640 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41137896-cb01-4aa7-a4c0-786f7db16906-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449658 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qjj5\" (UniqueName: \"kubernetes.io/projected/41137896-cb01-4aa7-a4c0-786f7db16906-kube-api-access-9qjj5\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449671 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd72eca-ef1e-4445-9d19-65ff92842e15-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449683 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-647mw\" (UniqueName: \"kubernetes.io/projected/9fbe33db-2ad3-4693-b957-716547fb796f-kube-api-access-647mw\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.449775 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75t8k\" (UniqueName: \"kubernetes.io/projected/93356bbd-8831-4fad-a7a7-4494b4244c26-kube-api-access-75t8k\") on node \"crc\" DevicePath \"\""
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.582050 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2c77-account-create-update-k55xq"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.583384 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qbpk9"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.584839 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a3c7-account-create-update-mqslp"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.591756 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hgjbd"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.595323 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q6drh"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.598323 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ed6-account-create-update-prsmk"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.598840 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71408791-d0ae-4bb7-b758-f6d343cf58a7" path="/var/lib/kubelet/pods/71408791-d0ae-4bb7-b758-f6d343cf58a7/volumes"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622602 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c77-account-create-update-k55xq" event={"ID":"4887e48e-971e-4f3a-8a5f-a050961c9c7c","Type":"ContainerDied","Data":"f64efd84b4cb025058e6a75eedf2a49f0e7d527b68ce07dc7bf30d18ebfc4e98"}
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622650 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f64efd84b4cb025058e6a75eedf2a49f0e7d527b68ce07dc7bf30d18ebfc4e98"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622667 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbpk9" event={"ID":"93356bbd-8831-4fad-a7a7-4494b4244c26","Type":"ContainerDied","Data":"8b3fdfb8ad6008910fa6a7b00ad9a66ea2985815be06f972a83a48944fc1e46b"}
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622679 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fdfb8ad6008910fa6a7b00ad9a66ea2985815be06f972a83a48944fc1e46b"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622689 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a3c7-account-create-update-mqslp" event={"ID":"0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9","Type":"ContainerDied","Data":"c8497977b6a960a2ec6d2384fd3ca5beb0a8236ef3db340f3ae72d355e7bcb83"}
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622724 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8497977b6a960a2ec6d2384fd3ca5beb0a8236ef3db340f3ae72d355e7bcb83"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622735 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hgjbd" event={"ID":"2bd72eca-ef1e-4445-9d19-65ff92842e15","Type":"ContainerDied","Data":"3d01fb7038634309d1b71242b61e5cd78c4440e197e90e3ce70af0e868dd34b9"}
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622747 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d01fb7038634309d1b71242b61e5cd78c4440e197e90e3ce70af0e868dd34b9"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622757 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q6drh" event={"ID":"9fbe33db-2ad3-4693-b957-716547fb796f","Type":"ContainerDied","Data":"2b9460ac5cc0c11444cc39390e87b6bfc6555e2989a88f818e52598dba935da9"}
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622768 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9460ac5cc0c11444cc39390e87b6bfc6555e2989a88f818e52598dba935da9"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622778 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ed6-account-create-update-prsmk" event={"ID":"41137896-cb01-4aa7-a4c0-786f7db16906","Type":"ContainerDied","Data":"d5f3a0843c367195ee46802d59722e6805b7b965e760c315c6068cfbe7bf10be"}
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.622790 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f3a0843c367195ee46802d59722e6805b7b965e760c315c6068cfbe7bf10be"
Jan 23 09:29:45 crc kubenswrapper[4684]: I0123 09:29:45.712880 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2gbdd"]
Jan 23 09:29:46 crc kubenswrapper[4684]: I0123 09:29:46.611100 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2gbdd" event={"ID":"471d49c9-9531-4b3c-bbe5-1fb98852d71d","Type":"ContainerStarted","Data":"15ae7d3caa1770ef913e1d10f705291740aa46d16e9cc0610b9e6ceaff5be7ab"}
Jan 23 09:29:46 crc kubenswrapper[4684]: I0123 09:29:46.611659 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2gbdd" event={"ID":"471d49c9-9531-4b3c-bbe5-1fb98852d71d","Type":"ContainerStarted","Data":"c45a5bc9a2cb137ece3e1a8e3ac8f3576d048d01c6982a4e6a2563f213e1215a"}
Jan 23 09:29:46 crc kubenswrapper[4684]: I0123 09:29:46.636386 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2gbdd" podStartSLOduration=2.636363141 podStartE2EDuration="2.636363141s" podCreationTimestamp="2026-01-23 09:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:46.627192621 +0000 UTC m=+1359.250571172" watchObservedRunningTime="2026-01-23 09:29:46.636363141 +0000 UTC m=+1359.259741682"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.595898 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tn627"]
Jan 23 09:29:48 crc kubenswrapper[4684]: E0123 09:29:48.597539 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41137896-cb01-4aa7-a4c0-786f7db16906" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597569 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="41137896-cb01-4aa7-a4c0-786f7db16906" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: E0123 09:29:48.597579 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597585 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: E0123 09:29:48.597599 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbe33db-2ad3-4693-b957-716547fb796f" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597606 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbe33db-2ad3-4693-b957-716547fb796f" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: E0123 09:29:48.597623 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4887e48e-971e-4f3a-8a5f-a050961c9c7c" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597628 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4887e48e-971e-4f3a-8a5f-a050961c9c7c" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: E0123 09:29:48.597641 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd72eca-ef1e-4445-9d19-65ff92842e15" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597647 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd72eca-ef1e-4445-9d19-65ff92842e15" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: E0123 09:29:48.597657 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93356bbd-8831-4fad-a7a7-4494b4244c26" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597662 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="93356bbd-8831-4fad-a7a7-4494b4244c26" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597824 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd72eca-ef1e-4445-9d19-65ff92842e15" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597851 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597862 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4887e48e-971e-4f3a-8a5f-a050961c9c7c" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597872 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbe33db-2ad3-4693-b957-716547fb796f" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597884 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="41137896-cb01-4aa7-a4c0-786f7db16906" containerName="mariadb-account-create-update"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.597892 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="93356bbd-8831-4fad-a7a7-4494b4244c26" containerName="mariadb-database-create"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.598659 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.601364 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4hbkx"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.602534 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.606856 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tn627"]
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.658982 4684 generic.go:334] "Generic (PLEG): container finished" podID="82a71d38-3c68-43a9-9913-bc184ebed996" containerID="117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6" exitCode=0
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.659033 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82a71d38-3c68-43a9-9913-bc184ebed996","Type":"ContainerDied","Data":"117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6"}
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.701317 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvkp8\" (UniqueName: \"kubernetes.io/projected/0ed6b304-077d-4d13-a28b-2c41c046a303-kube-api-access-mvkp8\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.701378 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-combined-ca-bundle\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.701465 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-config-data\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.701524 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-db-sync-config-data\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.803631 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-config-data\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.804425 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-db-sync-config-data\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.805118 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvkp8\" (UniqueName: \"kubernetes.io/projected/0ed6b304-077d-4d13-a28b-2c41c046a303-kube-api-access-mvkp8\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.805196 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-combined-ca-bundle\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.817225 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-db-sync-config-data\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.826413 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-config-data\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.832411 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-combined-ca-bundle\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.832628 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvkp8\" (UniqueName: \"kubernetes.io/projected/0ed6b304-077d-4d13-a28b-2c41c046a303-kube-api-access-mvkp8\") pod \"glance-db-sync-tn627\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:48 crc kubenswrapper[4684]: I0123 09:29:48.958355 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tn627"
Jan 23 09:29:49 crc kubenswrapper[4684]: I0123 09:29:49.515321 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tn627"]
Jan 23 09:29:49 crc kubenswrapper[4684]: I0123 09:29:49.537751 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 23 09:29:49 crc kubenswrapper[4684]: I0123 09:29:49.671955 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tn627" event={"ID":"0ed6b304-077d-4d13-a28b-2c41c046a303","Type":"ContainerStarted","Data":"b2081406d5132fa497ff9c9a357194ecfe42ff1d24a87a77cf56852e351319c4"}
Jan 23 09:29:51 crc kubenswrapper[4684]: I0123 09:29:51.688594 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82a71d38-3c68-43a9-9913-bc184ebed996","Type":"ContainerStarted","Data":"817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12"}
Jan 23 09:29:54 crc kubenswrapper[4684]: I0123 09:29:54.715455 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 23 09:29:54 crc kubenswrapper[4684]: I0123 09:29:54.743997 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371894.1108 podStartE2EDuration="2m22.743976424s" podCreationTimestamp="2026-01-23 09:27:32 +0000 UTC" firstStartedPulling="2026-01-23 09:27:35.354199205 +0000 UTC m=+1227.977577746" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:54.73960048 +0000 UTC m=+1367.362979041" watchObservedRunningTime="2026-01-23 09:29:54.743976424 +0000 UTC m=+1367.367354985"
Jan 23 09:29:55 crc kubenswrapper[4684]: I0123 09:29:55.733459 4684 generic.go:334] "Generic (PLEG): container finished" podID="471d49c9-9531-4b3c-bbe5-1fb98852d71d" containerID="15ae7d3caa1770ef913e1d10f705291740aa46d16e9cc0610b9e6ceaff5be7ab" exitCode=0
Jan 23 09:29:55 crc kubenswrapper[4684]: I0123 09:29:55.733523 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2gbdd" event={"ID":"471d49c9-9531-4b3c-bbe5-1fb98852d71d","Type":"ContainerDied","Data":"15ae7d3caa1770ef913e1d10f705291740aa46d16e9cc0610b9e6ceaff5be7ab"}
Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.141386 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2gbdd" Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.261211 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-628sw\" (UniqueName: \"kubernetes.io/projected/471d49c9-9531-4b3c-bbe5-1fb98852d71d-kube-api-access-628sw\") pod \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.261838 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/471d49c9-9531-4b3c-bbe5-1fb98852d71d-operator-scripts\") pod \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\" (UID: \"471d49c9-9531-4b3c-bbe5-1fb98852d71d\") " Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.262798 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/471d49c9-9531-4b3c-bbe5-1fb98852d71d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "471d49c9-9531-4b3c-bbe5-1fb98852d71d" (UID: "471d49c9-9531-4b3c-bbe5-1fb98852d71d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.275593 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/471d49c9-9531-4b3c-bbe5-1fb98852d71d-kube-api-access-628sw" (OuterVolumeSpecName: "kube-api-access-628sw") pod "471d49c9-9531-4b3c-bbe5-1fb98852d71d" (UID: "471d49c9-9531-4b3c-bbe5-1fb98852d71d"). InnerVolumeSpecName "kube-api-access-628sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.364441 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/471d49c9-9531-4b3c-bbe5-1fb98852d71d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.364500 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-628sw\" (UniqueName: \"kubernetes.io/projected/471d49c9-9531-4b3c-bbe5-1fb98852d71d-kube-api-access-628sw\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.752037 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2gbdd" event={"ID":"471d49c9-9531-4b3c-bbe5-1fb98852d71d","Type":"ContainerDied","Data":"c45a5bc9a2cb137ece3e1a8e3ac8f3576d048d01c6982a4e6a2563f213e1215a"} Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.752081 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45a5bc9a2cb137ece3e1a8e3ac8f3576d048d01c6982a4e6a2563f213e1215a" Jan 23 09:29:57 crc kubenswrapper[4684]: I0123 09:29:57.752138 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2gbdd" Jan 23 09:29:58 crc kubenswrapper[4684]: I0123 09:29:58.803611 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="80a7fc30-a101-4948-9e81-34c2dfb02797" containerName="galera" probeResult="failure" output="command timed out" Jan 23 09:29:58 crc kubenswrapper[4684]: I0123 09:29:58.806205 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="80a7fc30-a101-4948-9e81-34c2dfb02797" containerName="galera" probeResult="failure" output="command timed out" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.144373 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx"] Jan 23 09:30:00 crc kubenswrapper[4684]: E0123 09:30:00.144776 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471d49c9-9531-4b3c-bbe5-1fb98852d71d" containerName="mariadb-account-create-update" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.144791 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="471d49c9-9531-4b3c-bbe5-1fb98852d71d" containerName="mariadb-account-create-update" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.144970 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="471d49c9-9531-4b3c-bbe5-1fb98852d71d" containerName="mariadb-account-create-update" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.145576 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.148681 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.155726 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx"] Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.168352 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.311299 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmnk\" (UniqueName: \"kubernetes.io/projected/0418d43a-0c43-459c-baf2-71075458ff45-kube-api-access-6zmnk\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.311360 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0418d43a-0c43-459c-baf2-71075458ff45-secret-volume\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.311394 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0418d43a-0c43-459c-baf2-71075458ff45-config-volume\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.413571 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0418d43a-0c43-459c-baf2-71075458ff45-secret-volume\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.414570 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0418d43a-0c43-459c-baf2-71075458ff45-config-volume\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.415110 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zmnk\" (UniqueName: \"kubernetes.io/projected/0418d43a-0c43-459c-baf2-71075458ff45-kube-api-access-6zmnk\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.415671 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0418d43a-0c43-459c-baf2-71075458ff45-config-volume\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.423167 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0418d43a-0c43-459c-baf2-71075458ff45-secret-volume\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.442965 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zmnk\" (UniqueName: \"kubernetes.io/projected/0418d43a-0c43-459c-baf2-71075458ff45-kube-api-access-6zmnk\") pod \"collect-profiles-29486010-lfsjx\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.506598 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:00 crc kubenswrapper[4684]: W0123 09:30:00.778717 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0418d43a_0c43_459c_baf2_71075458ff45.slice/crio-bbabbefb2f60dc8286cb7d1f611700ede80bcdfe61756db705ed82072205a32e WatchSource:0}: Error finding container bbabbefb2f60dc8286cb7d1f611700ede80bcdfe61756db705ed82072205a32e: Status 404 returned error can't find the container with id bbabbefb2f60dc8286cb7d1f611700ede80bcdfe61756db705ed82072205a32e Jan 23 09:30:00 crc kubenswrapper[4684]: I0123 09:30:00.781338 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx"] Jan 23 09:30:01 crc kubenswrapper[4684]: I0123 09:30:01.791707 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" event={"ID":"0418d43a-0c43-459c-baf2-71075458ff45","Type":"ContainerStarted","Data":"bbabbefb2f60dc8286cb7d1f611700ede80bcdfe61756db705ed82072205a32e"} Jan 23 09:30:04 crc kubenswrapper[4684]: I0123 09:30:04.879232 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 23 09:30:06 crc kubenswrapper[4684]: I0123 09:30:06.836835 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" event={"ID":"0418d43a-0c43-459c-baf2-71075458ff45","Type":"ContainerStarted","Data":"f10074ccec3734d1868334e604289640a2cd5b4921d10d8fd4422520921e8f24"} Jan 23 09:30:07 crc kubenswrapper[4684]: I0123 09:30:07.845415 4684 generic.go:334] "Generic (PLEG): container finished" podID="0418d43a-0c43-459c-baf2-71075458ff45" containerID="f10074ccec3734d1868334e604289640a2cd5b4921d10d8fd4422520921e8f24" exitCode=0 Jan 23 09:30:07 crc kubenswrapper[4684]: I0123 09:30:07.845645 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" event={"ID":"0418d43a-0c43-459c-baf2-71075458ff45","Type":"ContainerDied","Data":"f10074ccec3734d1868334e604289640a2cd5b4921d10d8fd4422520921e8f24"} Jan 23 09:30:14 crc kubenswrapper[4684]: I0123 09:30:14.878937 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 23 09:30:24 crc kubenswrapper[4684]: I0123 09:30:24.877969 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.153526 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.287812 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0418d43a-0c43-459c-baf2-71075458ff45-config-volume\") pod \"0418d43a-0c43-459c-baf2-71075458ff45\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.287914 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zmnk\" (UniqueName: \"kubernetes.io/projected/0418d43a-0c43-459c-baf2-71075458ff45-kube-api-access-6zmnk\") pod \"0418d43a-0c43-459c-baf2-71075458ff45\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.287954 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0418d43a-0c43-459c-baf2-71075458ff45-secret-volume\") pod \"0418d43a-0c43-459c-baf2-71075458ff45\" (UID: \"0418d43a-0c43-459c-baf2-71075458ff45\") " Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.289417 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0418d43a-0c43-459c-baf2-71075458ff45-config-volume" (OuterVolumeSpecName: "config-volume") pod "0418d43a-0c43-459c-baf2-71075458ff45" (UID: "0418d43a-0c43-459c-baf2-71075458ff45"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.296227 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418d43a-0c43-459c-baf2-71075458ff45-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0418d43a-0c43-459c-baf2-71075458ff45" (UID: "0418d43a-0c43-459c-baf2-71075458ff45"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.313912 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418d43a-0c43-459c-baf2-71075458ff45-kube-api-access-6zmnk" (OuterVolumeSpecName: "kube-api-access-6zmnk") pod "0418d43a-0c43-459c-baf2-71075458ff45" (UID: "0418d43a-0c43-459c-baf2-71075458ff45"). InnerVolumeSpecName "kube-api-access-6zmnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.389344 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0418d43a-0c43-459c-baf2-71075458ff45-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.389381 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zmnk\" (UniqueName: \"kubernetes.io/projected/0418d43a-0c43-459c-baf2-71075458ff45-kube-api-access-6zmnk\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.389391 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0418d43a-0c43-459c-baf2-71075458ff45-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:26 crc kubenswrapper[4684]: E0123 09:30:26.833920 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 23 09:30:26 crc kubenswrapper[4684]: E0123 09:30:26.834104 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvkp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-tn627_openstack(0ed6b304-077d-4d13-a28b-2c41c046a303): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 23 09:30:26 crc kubenswrapper[4684]: E0123 09:30:26.835310 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-tn627" podUID="0ed6b304-077d-4d13-a28b-2c41c046a303" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.988674 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.988665 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx" event={"ID":"0418d43a-0c43-459c-baf2-71075458ff45","Type":"ContainerDied","Data":"bbabbefb2f60dc8286cb7d1f611700ede80bcdfe61756db705ed82072205a32e"} Jan 23 09:30:26 crc kubenswrapper[4684]: I0123 09:30:26.988734 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbabbefb2f60dc8286cb7d1f611700ede80bcdfe61756db705ed82072205a32e" Jan 23 09:30:26 crc kubenswrapper[4684]: E0123 09:30:26.991517 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-tn627" podUID="0ed6b304-077d-4d13-a28b-2c41c046a303" Jan 23 09:30:34 crc kubenswrapper[4684]: I0123 09:30:34.895085 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.665459 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mzsx6"] Jan 23 09:30:35 crc kubenswrapper[4684]: E0123 09:30:35.666152 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0418d43a-0c43-459c-baf2-71075458ff45" containerName="collect-profiles" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.666173 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0418d43a-0c43-459c-baf2-71075458ff45" containerName="collect-profiles" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.666342 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0418d43a-0c43-459c-baf2-71075458ff45" containerName="collect-profiles" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.667023 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.683096 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2b42-account-create-update-w5njq"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.684211 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.686292 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.689335 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mzsx6"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.736530 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2b42-account-create-update-w5njq"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.744881 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6j2q\" (UniqueName: \"kubernetes.io/projected/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-kube-api-access-v6j2q\") pod \"cinder-db-create-mzsx6\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.744946 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-operator-scripts\") pod \"barbican-2b42-account-create-update-w5njq\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.745124 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-operator-scripts\") pod \"cinder-db-create-mzsx6\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.745324 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqw2\" (UniqueName: \"kubernetes.io/projected/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-kube-api-access-kkqw2\") pod \"barbican-2b42-account-create-update-w5njq\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.791628 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jcvvb"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.792576 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.812530 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jcvvb"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.822331 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-aa08-account-create-update-c8jx4"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.823332 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.828903 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847334 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6j2q\" (UniqueName: \"kubernetes.io/projected/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-kube-api-access-v6j2q\") pod \"cinder-db-create-mzsx6\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847411 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-operator-scripts\") pod \"barbican-2b42-account-create-update-w5njq\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847465 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62bsk\" (UniqueName: \"kubernetes.io/projected/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-kube-api-access-62bsk\") pod \"cinder-aa08-account-create-update-c8jx4\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847512 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsx29\" (UniqueName: \"kubernetes.io/projected/51a95467-7819-43c2-aa22-699c74df62e8-kube-api-access-gsx29\") pod \"barbican-db-create-jcvvb\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") " pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847543 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-operator-scripts\") pod \"cinder-db-create-mzsx6\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847614 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a95467-7819-43c2-aa22-699c74df62e8-operator-scripts\") pod \"barbican-db-create-jcvvb\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") " pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847679 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkqw2\" (UniqueName: \"kubernetes.io/projected/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-kube-api-access-kkqw2\") pod \"barbican-2b42-account-create-update-w5njq\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.847730 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-operator-scripts\") pod \"cinder-aa08-account-create-update-c8jx4\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " 
pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.848364 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-operator-scripts\") pod \"barbican-2b42-account-create-update-w5njq\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.848644 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-operator-scripts\") pod \"cinder-db-create-mzsx6\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.897130 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkqw2\" (UniqueName: \"kubernetes.io/projected/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-kube-api-access-kkqw2\") pod \"barbican-2b42-account-create-update-w5njq\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.935418 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6j2q\" (UniqueName: \"kubernetes.io/projected/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-kube-api-access-v6j2q\") pod \"cinder-db-create-mzsx6\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.946852 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-aa08-account-create-update-c8jx4"] Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.949645 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62bsk\" (UniqueName: \"kubernetes.io/projected/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-kube-api-access-62bsk\") pod \"cinder-aa08-account-create-update-c8jx4\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.949687 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsx29\" (UniqueName: \"kubernetes.io/projected/51a95467-7819-43c2-aa22-699c74df62e8-kube-api-access-gsx29\") pod \"barbican-db-create-jcvvb\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") " pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.949734 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a95467-7819-43c2-aa22-699c74df62e8-operator-scripts\") pod \"barbican-db-create-jcvvb\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") " pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.949762 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-operator-scripts\") pod \"cinder-aa08-account-create-update-c8jx4\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.950399 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-operator-scripts\") pod \"cinder-aa08-account-create-update-c8jx4\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.950975 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a95467-7819-43c2-aa22-699c74df62e8-operator-scripts\") pod \"barbican-db-create-jcvvb\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") " pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.984217 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:35 crc kubenswrapper[4684]: I0123 09:30:35.999608 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.002045 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62bsk\" (UniqueName: \"kubernetes.io/projected/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-kube-api-access-62bsk\") pod \"cinder-aa08-account-create-update-c8jx4\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.002518 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsx29\" (UniqueName: \"kubernetes.io/projected/51a95467-7819-43c2-aa22-699c74df62e8-kube-api-access-gsx29\") pod \"barbican-db-create-jcvvb\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") " pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.095475 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wdrbh"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.105040 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.108090 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.112503 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wdrbh"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.137952 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.161911 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c57d15-8e5f-4245-8830-c84079c9bee5-operator-scripts\") pod \"neutron-db-create-wdrbh\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.162016 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtp65\" (UniqueName: \"kubernetes.io/projected/c6c57d15-8e5f-4245-8830-c84079c9bee5-kube-api-access-rtp65\") pod \"neutron-db-create-wdrbh\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.189852 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c3f2-account-create-update-z6d9n"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.190895 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.198232 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c3f2-account-create-update-z6d9n"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.199495 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.268671 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-operator-scripts\") pod \"neutron-c3f2-account-create-update-z6d9n\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.268951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c57d15-8e5f-4245-8830-c84079c9bee5-operator-scripts\") pod \"neutron-db-create-wdrbh\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.269006 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwzt\" (UniqueName: \"kubernetes.io/projected/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-kube-api-access-zqwzt\") pod \"neutron-c3f2-account-create-update-z6d9n\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.269036 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtp65\" (UniqueName: \"kubernetes.io/projected/c6c57d15-8e5f-4245-8830-c84079c9bee5-kube-api-access-rtp65\") pod \"neutron-db-create-wdrbh\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.270278 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c57d15-8e5f-4245-8830-c84079c9bee5-operator-scripts\") pod \"neutron-db-create-wdrbh\" (UID: 
\"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.300636 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-znt8j"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.302555 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.311380 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8c4md" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.311613 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.311786 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.311955 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.331864 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtp65\" (UniqueName: \"kubernetes.io/projected/c6c57d15-8e5f-4245-8830-c84079c9bee5-kube-api-access-rtp65\") pod \"neutron-db-create-wdrbh\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.386067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-operator-scripts\") pod \"neutron-c3f2-account-create-update-z6d9n\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.386166 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqwzt\" (UniqueName: \"kubernetes.io/projected/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-kube-api-access-zqwzt\") pod \"neutron-c3f2-account-create-update-z6d9n\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.387166 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-operator-scripts\") pod \"neutron-c3f2-account-create-update-z6d9n\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.401953 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-znt8j"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.428510 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.431193 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqwzt\" (UniqueName: \"kubernetes.io/projected/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-kube-api-access-zqwzt\") pod \"neutron-c3f2-account-create-update-z6d9n\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.493387 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccj8l\" (UniqueName: \"kubernetes.io/projected/7b4ce139-6147-4c82-8b4d-74de8f779b6c-kube-api-access-ccj8l\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.493746 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-combined-ca-bundle\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.493870 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-config-data\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.530478 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.595489 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccj8l\" (UniqueName: \"kubernetes.io/projected/7b4ce139-6147-4c82-8b4d-74de8f779b6c-kube-api-access-ccj8l\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.595545 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-combined-ca-bundle\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.595606 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-config-data\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.602955 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-combined-ca-bundle\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.603495 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-config-data\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.631093 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccj8l\" (UniqueName: \"kubernetes.io/projected/7b4ce139-6147-4c82-8b4d-74de8f779b6c-kube-api-access-ccj8l\") pod \"keystone-db-sync-znt8j\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") " pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.643056 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-znt8j" Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.871917 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2b42-account-create-update-w5njq"] Jan 23 09:30:36 crc kubenswrapper[4684]: I0123 09:30:36.991567 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mzsx6"] Jan 23 09:30:37 crc kubenswrapper[4684]: W0123 09:30:37.014488 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ae59c0f_8e7f_4f59_a991_3e6afb7e0daf.slice/crio-f4b115caa2a3115561247ae9ffe2c3b9d62645b41c4bfcbc66c166d54847c996 WatchSource:0}: Error finding container f4b115caa2a3115561247ae9ffe2c3b9d62645b41c4bfcbc66c166d54847c996: Status 404 returned error can't find the container with id f4b115caa2a3115561247ae9ffe2c3b9d62645b41c4bfcbc66c166d54847c996 Jan 23 09:30:37 crc kubenswrapper[4684]: I0123 09:30:37.059383 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jcvvb"] Jan 23 09:30:37 crc kubenswrapper[4684]: I0123 09:30:37.090054 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mzsx6" event={"ID":"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf","Type":"ContainerStarted","Data":"f4b115caa2a3115561247ae9ffe2c3b9d62645b41c4bfcbc66c166d54847c996"} Jan 23 09:30:37 crc kubenswrapper[4684]: I0123 09:30:37.091169 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2b42-account-create-update-w5njq" event={"ID":"5d712541-e87b-49c3-8cde-2daf0ef2c0bd","Type":"ContainerStarted","Data":"40dcecfbbcc18bdb0fc4f31962b9590d4e025e31304a9c4cefe35501fed3dcba"} Jan 23 09:30:37 crc kubenswrapper[4684]: I0123 09:30:37.159316 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wdrbh"] Jan 23 09:30:37 crc kubenswrapper[4684]: W0123 09:30:37.165603 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c57d15_8e5f_4245_8830_c84079c9bee5.slice/crio-2acefd6788f50cd50bb1d671c33b85455464c8c77c740b9207be459024a44f09 WatchSource:0}: Error finding container 2acefd6788f50cd50bb1d671c33b85455464c8c77c740b9207be459024a44f09: Status 404 returned error can't find the container with id 2acefd6788f50cd50bb1d671c33b85455464c8c77c740b9207be459024a44f09 Jan 23 09:30:38 crc kubenswrapper[4684]: I0123 09:30:38.166977 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wdrbh" event={"ID":"c6c57d15-8e5f-4245-8830-c84079c9bee5","Type":"ContainerStarted","Data":"2acefd6788f50cd50bb1d671c33b85455464c8c77c740b9207be459024a44f09"} Jan 23 09:30:38 crc kubenswrapper[4684]: I0123 09:30:38.181630 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jcvvb" event={"ID":"51a95467-7819-43c2-aa22-699c74df62e8","Type":"ContainerStarted","Data":"cdf14a178bae7cda5ec1a67a51683cad27b4138e22e3bed8d62c565a504a75f0"} Jan 23 09:30:38 crc kubenswrapper[4684]: I0123 09:30:38.887390 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-aa08-account-create-update-c8jx4"] Jan 23 09:30:39 crc kubenswrapper[4684]: I0123 09:30:39.189733 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aa08-account-create-update-c8jx4" 
event={"ID":"248994f7-7b0f-41e4-8a32-2dbf42ea41e9","Type":"ContainerStarted","Data":"f408ef5ed791cd64882b742b377fa914b5da76028c3ac49cbde8a35328000bc6"} Jan 23 09:30:39 crc kubenswrapper[4684]: I0123 09:30:39.238222 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-znt8j"] Jan 23 09:30:39 crc kubenswrapper[4684]: W0123 09:30:39.239808 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b4ce139_6147_4c82_8b4d_74de8f779b6c.slice/crio-bd85acbbd84e664608f9bb96652801cf94a46e48f3bb8be05aef7eba8f93cc78 WatchSource:0}: Error finding container bd85acbbd84e664608f9bb96652801cf94a46e48f3bb8be05aef7eba8f93cc78: Status 404 returned error can't find the container with id bd85acbbd84e664608f9bb96652801cf94a46e48f3bb8be05aef7eba8f93cc78 Jan 23 09:30:39 crc kubenswrapper[4684]: W0123 09:30:39.324583 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded2e86a9_9a16_4fd0_b065_d95744b90dd7.slice/crio-bab4d61fb47df3c1d5407e469dd442b5e76715b77ce665a2a9929eef99a0056e WatchSource:0}: Error finding container bab4d61fb47df3c1d5407e469dd442b5e76715b77ce665a2a9929eef99a0056e: Status 404 returned error can't find the container with id bab4d61fb47df3c1d5407e469dd442b5e76715b77ce665a2a9929eef99a0056e Jan 23 09:30:39 crc kubenswrapper[4684]: I0123 09:30:39.330520 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c3f2-account-create-update-z6d9n"] Jan 23 09:30:40 crc kubenswrapper[4684]: I0123 09:30:40.198451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mzsx6" event={"ID":"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf","Type":"ContainerStarted","Data":"830ce8d856a7e21570caa0e1946e2834d0d061681d555f9601a1674f4b8129cf"} Jan 23 09:30:40 crc kubenswrapper[4684]: I0123 09:30:40.200381 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2b42-account-create-update-w5njq" event={"ID":"5d712541-e87b-49c3-8cde-2daf0ef2c0bd","Type":"ContainerStarted","Data":"bf19e3847088e82aaa7e07e340cc00931742cc0eca1a0d3ab89b2897a7b88ffb"} Jan 23 09:30:40 crc kubenswrapper[4684]: I0123 09:30:40.201938 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wdrbh" event={"ID":"c6c57d15-8e5f-4245-8830-c84079c9bee5","Type":"ContainerStarted","Data":"f738cd1b919cda818554031b3b6e23bccada191eb207eee446169ca4295f4ce9"} Jan 23 09:30:40 crc kubenswrapper[4684]: I0123 09:30:40.203013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c3f2-account-create-update-z6d9n" event={"ID":"ed2e86a9-9a16-4fd0-b065-d95744b90dd7","Type":"ContainerStarted","Data":"bab4d61fb47df3c1d5407e469dd442b5e76715b77ce665a2a9929eef99a0056e"} Jan 23 09:30:40 crc kubenswrapper[4684]: I0123 09:30:40.204360 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jcvvb" event={"ID":"51a95467-7819-43c2-aa22-699c74df62e8","Type":"ContainerStarted","Data":"9715380cd5eb1122dc2e5c7e4b385a45094637abbe084dbbd0431ea0ae1902cf"} Jan 23 09:30:40 crc kubenswrapper[4684]: I0123 09:30:40.205279 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-znt8j" event={"ID":"7b4ce139-6147-4c82-8b4d-74de8f779b6c","Type":"ContainerStarted","Data":"bd85acbbd84e664608f9bb96652801cf94a46e48f3bb8be05aef7eba8f93cc78"} Jan 23 09:30:41 crc kubenswrapper[4684]: I0123 09:30:41.214445 4684 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-c3f2-account-create-update-z6d9n" event={"ID":"ed2e86a9-9a16-4fd0-b065-d95744b90dd7","Type":"ContainerStarted","Data":"b7e9813925b9ba54eb397f2c870d9d986650388af36fc8bd3677e423b1e1c9ad"} Jan 23 09:30:41 crc kubenswrapper[4684]: I0123 09:30:41.216516 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aa08-account-create-update-c8jx4" event={"ID":"248994f7-7b0f-41e4-8a32-2dbf42ea41e9","Type":"ContainerStarted","Data":"389d9652e99fbc6ce5c504c5926e0266187b9792c0c555bda41c84461d0c5326"} Jan 23 09:30:42 crc kubenswrapper[4684]: I0123 09:30:42.279336 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c3f2-account-create-update-z6d9n" podStartSLOduration=6.279288974 podStartE2EDuration="6.279288974s" podCreationTimestamp="2026-01-23 09:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:42.259872385 +0000 UTC m=+1414.883250926" watchObservedRunningTime="2026-01-23 09:30:42.279288974 +0000 UTC m=+1414.902667515" Jan 23 09:30:42 crc kubenswrapper[4684]: I0123 09:30:42.295024 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2b42-account-create-update-w5njq" podStartSLOduration=7.294998329 podStartE2EDuration="7.294998329s" podCreationTimestamp="2026-01-23 09:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:42.275484677 +0000 UTC m=+1414.898863228" watchObservedRunningTime="2026-01-23 09:30:42.294998329 +0000 UTC m=+1414.918376870" Jan 23 09:30:42 crc kubenswrapper[4684]: I0123 09:30:42.308840 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-aa08-account-create-update-c8jx4" podStartSLOduration=7.3088166900000004 podStartE2EDuration="7.30881669s" podCreationTimestamp="2026-01-23 09:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:42.297449308 +0000 UTC m=+1414.920827879" watchObservedRunningTime="2026-01-23 09:30:42.30881669 +0000 UTC m=+1414.932195231" Jan 23 09:30:42 crc kubenswrapper[4684]: I0123 09:30:42.327294 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-jcvvb" podStartSLOduration=7.327267732 podStartE2EDuration="7.327267732s" podCreationTimestamp="2026-01-23 09:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:42.32187732 +0000 UTC m=+1414.945255861" watchObservedRunningTime="2026-01-23 09:30:42.327267732 +0000 UTC m=+1414.950646273" Jan 23 09:30:42 crc kubenswrapper[4684]: I0123 09:30:42.347574 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-wdrbh" podStartSLOduration=6.347550727 podStartE2EDuration="6.347550727s" podCreationTimestamp="2026-01-23 09:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:42.340120346 +0000 UTC m=+1414.963498877" watchObservedRunningTime="2026-01-23 09:30:42.347550727 +0000 UTC m=+1414.970929268" Jan 23 09:30:42 crc kubenswrapper[4684]: I0123 09:30:42.370292 4684 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-db-create-mzsx6" podStartSLOduration=7.370268079 podStartE2EDuration="7.370268079s" podCreationTimestamp="2026-01-23 09:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:42.360448931 +0000 UTC m=+1414.983827472" watchObservedRunningTime="2026-01-23 09:30:42.370268079 +0000 UTC m=+1414.993646620" Jan 23 09:30:45 crc kubenswrapper[4684]: I0123 09:30:45.270333 4684 generic.go:334] "Generic (PLEG): container finished" podID="7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" containerID="830ce8d856a7e21570caa0e1946e2834d0d061681d555f9601a1674f4b8129cf" exitCode=0 Jan 23 09:30:45 crc kubenswrapper[4684]: I0123 09:30:45.270384 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mzsx6" event={"ID":"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf","Type":"ContainerDied","Data":"830ce8d856a7e21570caa0e1946e2834d0d061681d555f9601a1674f4b8129cf"} Jan 23 09:30:45 crc kubenswrapper[4684]: I0123 09:30:45.272829 4684 generic.go:334] "Generic (PLEG): container finished" podID="c6c57d15-8e5f-4245-8830-c84079c9bee5" containerID="f738cd1b919cda818554031b3b6e23bccada191eb207eee446169ca4295f4ce9" exitCode=0 Jan 23 09:30:45 crc kubenswrapper[4684]: I0123 09:30:45.272912 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wdrbh" event={"ID":"c6c57d15-8e5f-4245-8830-c84079c9bee5","Type":"ContainerDied","Data":"f738cd1b919cda818554031b3b6e23bccada191eb207eee446169ca4295f4ce9"} Jan 23 09:30:45 crc kubenswrapper[4684]: I0123 09:30:45.276791 4684 generic.go:334] "Generic (PLEG): container finished" podID="51a95467-7819-43c2-aa22-699c74df62e8" containerID="9715380cd5eb1122dc2e5c7e4b385a45094637abbe084dbbd0431ea0ae1902cf" exitCode=0 Jan 23 09:30:45 crc kubenswrapper[4684]: I0123 09:30:45.276818 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jcvvb" event={"ID":"51a95467-7819-43c2-aa22-699c74df62e8","Type":"ContainerDied","Data":"9715380cd5eb1122dc2e5c7e4b385a45094637abbe084dbbd0431ea0ae1902cf"} Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.300814 4684 generic.go:334] "Generic (PLEG): container finished" podID="248994f7-7b0f-41e4-8a32-2dbf42ea41e9" containerID="389d9652e99fbc6ce5c504c5926e0266187b9792c0c555bda41c84461d0c5326" exitCode=0 Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.300964 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aa08-account-create-update-c8jx4" event={"ID":"248994f7-7b0f-41e4-8a32-2dbf42ea41e9","Type":"ContainerDied","Data":"389d9652e99fbc6ce5c504c5926e0266187b9792c0c555bda41c84461d0c5326"} Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.305593 4684 generic.go:334] "Generic (PLEG): container finished" podID="5d712541-e87b-49c3-8cde-2daf0ef2c0bd" containerID="bf19e3847088e82aaa7e07e340cc00931742cc0eca1a0d3ab89b2897a7b88ffb" exitCode=0 Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.305665 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2b42-account-create-update-w5njq" event={"ID":"5d712541-e87b-49c3-8cde-2daf0ef2c0bd","Type":"ContainerDied","Data":"bf19e3847088e82aaa7e07e340cc00931742cc0eca1a0d3ab89b2897a7b88ffb"} Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.307634 4684 generic.go:334] "Generic (PLEG): container finished" podID="ed2e86a9-9a16-4fd0-b065-d95744b90dd7" containerID="b7e9813925b9ba54eb397f2c870d9d986650388af36fc8bd3677e423b1e1c9ad" 
Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.307634 4684 generic.go:334] "Generic (PLEG): container finished" podID="ed2e86a9-9a16-4fd0-b065-d95744b90dd7" containerID="b7e9813925b9ba54eb397f2c870d9d986650388af36fc8bd3677e423b1e1c9ad" exitCode=0
Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.307668 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c3f2-account-create-update-z6d9n" event={"ID":"ed2e86a9-9a16-4fd0-b065-d95744b90dd7","Type":"ContainerDied","Data":"b7e9813925b9ba54eb397f2c870d9d986650388af36fc8bd3677e423b1e1c9ad"}
Jan 23 09:30:47 crc kubenswrapper[4684]: I0123 09:30:47.978720 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jcvvb"
Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.066976 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mzsx6"
Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.074194 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wdrbh"
Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.122434 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a95467-7819-43c2-aa22-699c74df62e8-operator-scripts\") pod \"51a95467-7819-43c2-aa22-699c74df62e8\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") "
Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.122784 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsx29\" (UniqueName: \"kubernetes.io/projected/51a95467-7819-43c2-aa22-699c74df62e8-kube-api-access-gsx29\") pod \"51a95467-7819-43c2-aa22-699c74df62e8\" (UID: \"51a95467-7819-43c2-aa22-699c74df62e8\") "
Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.123606 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51a95467-7819-43c2-aa22-699c74df62e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51a95467-7819-43c2-aa22-699c74df62e8" (UID: "51a95467-7819-43c2-aa22-699c74df62e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.124549 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a95467-7819-43c2-aa22-699c74df62e8-operator-scripts\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.226518 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6j2q\" (UniqueName: \"kubernetes.io/projected/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-kube-api-access-v6j2q\") pod \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.226667 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-operator-scripts\") pod \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\" (UID: \"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf\") " Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.226713 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c57d15-8e5f-4245-8830-c84079c9bee5-operator-scripts\") pod \"c6c57d15-8e5f-4245-8830-c84079c9bee5\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.226750 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtp65\" (UniqueName: \"kubernetes.io/projected/c6c57d15-8e5f-4245-8830-c84079c9bee5-kube-api-access-rtp65\") pod \"c6c57d15-8e5f-4245-8830-c84079c9bee5\" (UID: \"c6c57d15-8e5f-4245-8830-c84079c9bee5\") " Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.227197 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsx29\" (UniqueName: \"kubernetes.io/projected/51a95467-7819-43c2-aa22-699c74df62e8-kube-api-access-gsx29\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.227504 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" (UID: "7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.227545 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c57d15-8e5f-4245-8830-c84079c9bee5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6c57d15-8e5f-4245-8830-c84079c9bee5" (UID: "c6c57d15-8e5f-4245-8830-c84079c9bee5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.232971 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-kube-api-access-v6j2q" (OuterVolumeSpecName: "kube-api-access-v6j2q") pod "7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" (UID: "7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf"). InnerVolumeSpecName "kube-api-access-v6j2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.233894 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c57d15-8e5f-4245-8830-c84079c9bee5-kube-api-access-rtp65" (OuterVolumeSpecName: "kube-api-access-rtp65") pod "c6c57d15-8e5f-4245-8830-c84079c9bee5" (UID: "c6c57d15-8e5f-4245-8830-c84079c9bee5"). 
InnerVolumeSpecName "kube-api-access-rtp65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.323277 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jcvvb" event={"ID":"51a95467-7819-43c2-aa22-699c74df62e8","Type":"ContainerDied","Data":"cdf14a178bae7cda5ec1a67a51683cad27b4138e22e3bed8d62c565a504a75f0"} Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.323674 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdf14a178bae7cda5ec1a67a51683cad27b4138e22e3bed8d62c565a504a75f0" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.324040 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jcvvb" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.330038 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6j2q\" (UniqueName: \"kubernetes.io/projected/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-kube-api-access-v6j2q\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.330070 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.330084 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c57d15-8e5f-4245-8830-c84079c9bee5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.330096 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtp65\" (UniqueName: \"kubernetes.io/projected/c6c57d15-8e5f-4245-8830-c84079c9bee5-kube-api-access-rtp65\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.331666 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-znt8j" event={"ID":"7b4ce139-6147-4c82-8b4d-74de8f779b6c","Type":"ContainerStarted","Data":"bd570b482adca7f99ca2f281ec8679e767854f52d98272f50742820a27744f07"} Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.340827 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mzsx6" event={"ID":"7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf","Type":"ContainerDied","Data":"f4b115caa2a3115561247ae9ffe2c3b9d62645b41c4bfcbc66c166d54847c996"} Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.340875 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b115caa2a3115561247ae9ffe2c3b9d62645b41c4bfcbc66c166d54847c996" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.340952 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mzsx6" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.350743 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wdrbh" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.351624 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wdrbh" event={"ID":"c6c57d15-8e5f-4245-8830-c84079c9bee5","Type":"ContainerDied","Data":"2acefd6788f50cd50bb1d671c33b85455464c8c77c740b9207be459024a44f09"} Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.351671 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acefd6788f50cd50bb1d671c33b85455464c8c77c740b9207be459024a44f09" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.376601 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-znt8j" podStartSLOduration=3.651779345 podStartE2EDuration="12.376555204s" podCreationTimestamp="2026-01-23 09:30:36 +0000 UTC" firstStartedPulling="2026-01-23 09:30:39.243833838 +0000 UTC m=+1411.867212379" lastFinishedPulling="2026-01-23 09:30:47.968609697 +0000 UTC m=+1420.591988238" observedRunningTime="2026-01-23 09:30:48.3633348 +0000 UTC m=+1420.986713351" watchObservedRunningTime="2026-01-23 09:30:48.376555204 +0000 UTC m=+1420.999933745" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.722118 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.857083 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62bsk\" (UniqueName: \"kubernetes.io/projected/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-kube-api-access-62bsk\") pod \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.857241 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-operator-scripts\") pod \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\" (UID: \"248994f7-7b0f-41e4-8a32-2dbf42ea41e9\") " Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.861909 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "248994f7-7b0f-41e4-8a32-2dbf42ea41e9" (UID: "248994f7-7b0f-41e4-8a32-2dbf42ea41e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.867901 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-kube-api-access-62bsk" (OuterVolumeSpecName: "kube-api-access-62bsk") pod "248994f7-7b0f-41e4-8a32-2dbf42ea41e9" (UID: "248994f7-7b0f-41e4-8a32-2dbf42ea41e9"). InnerVolumeSpecName "kube-api-access-62bsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.958887 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62bsk\" (UniqueName: \"kubernetes.io/projected/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-kube-api-access-62bsk\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.958918 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/248994f7-7b0f-41e4-8a32-2dbf42ea41e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.959291 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:48 crc kubenswrapper[4684]: I0123 09:30:48.973267 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.059799 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-operator-scripts\") pod \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.059924 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwzt\" (UniqueName: \"kubernetes.io/projected/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-kube-api-access-zqwzt\") pod \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\" (UID: \"ed2e86a9-9a16-4fd0-b065-d95744b90dd7\") " Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.060665 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed2e86a9-9a16-4fd0-b065-d95744b90dd7" (UID: "ed2e86a9-9a16-4fd0-b065-d95744b90dd7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.064823 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-kube-api-access-zqwzt" (OuterVolumeSpecName: "kube-api-access-zqwzt") pod "ed2e86a9-9a16-4fd0-b065-d95744b90dd7" (UID: "ed2e86a9-9a16-4fd0-b065-d95744b90dd7"). InnerVolumeSpecName "kube-api-access-zqwzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.161792 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-operator-scripts\") pod \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.161903 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkqw2\" (UniqueName: \"kubernetes.io/projected/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-kube-api-access-kkqw2\") pod \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\" (UID: \"5d712541-e87b-49c3-8cde-2daf0ef2c0bd\") " Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.162275 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d712541-e87b-49c3-8cde-2daf0ef2c0bd" (UID: "5d712541-e87b-49c3-8cde-2daf0ef2c0bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.162305 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.162373 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqwzt\" (UniqueName: \"kubernetes.io/projected/ed2e86a9-9a16-4fd0-b065-d95744b90dd7-kube-api-access-zqwzt\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.167530 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-kube-api-access-kkqw2" (OuterVolumeSpecName: "kube-api-access-kkqw2") pod "5d712541-e87b-49c3-8cde-2daf0ef2c0bd" (UID: "5d712541-e87b-49c3-8cde-2daf0ef2c0bd"). InnerVolumeSpecName "kube-api-access-kkqw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.264394 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.264451 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkqw2\" (UniqueName: \"kubernetes.io/projected/5d712541-e87b-49c3-8cde-2daf0ef2c0bd-kube-api-access-kkqw2\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.364229 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aa08-account-create-update-c8jx4" event={"ID":"248994f7-7b0f-41e4-8a32-2dbf42ea41e9","Type":"ContainerDied","Data":"f408ef5ed791cd64882b742b377fa914b5da76028c3ac49cbde8a35328000bc6"} Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.364283 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f408ef5ed791cd64882b742b377fa914b5da76028c3ac49cbde8a35328000bc6" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.364356 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-aa08-account-create-update-c8jx4" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.369064 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2b42-account-create-update-w5njq" event={"ID":"5d712541-e87b-49c3-8cde-2daf0ef2c0bd","Type":"ContainerDied","Data":"40dcecfbbcc18bdb0fc4f31962b9590d4e025e31304a9c4cefe35501fed3dcba"} Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.369112 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40dcecfbbcc18bdb0fc4f31962b9590d4e025e31304a9c4cefe35501fed3dcba" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.369184 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2b42-account-create-update-w5njq" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.373504 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c3f2-account-create-update-z6d9n" Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.374188 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c3f2-account-create-update-z6d9n" event={"ID":"ed2e86a9-9a16-4fd0-b065-d95744b90dd7","Type":"ContainerDied","Data":"bab4d61fb47df3c1d5407e469dd442b5e76715b77ce665a2a9929eef99a0056e"} Jan 23 09:30:49 crc kubenswrapper[4684]: I0123 09:30:49.374229 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab4d61fb47df3c1d5407e469dd442b5e76715b77ce665a2a9929eef99a0056e" Jan 23 09:30:50 crc kubenswrapper[4684]: I0123 09:30:50.405254 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tn627" event={"ID":"0ed6b304-077d-4d13-a28b-2c41c046a303","Type":"ContainerStarted","Data":"3f222092aa592d93f5764d65b5d2400acc8ba125cb721c954901c7b1ff1c30ad"} Jan 23 09:30:50 crc kubenswrapper[4684]: I0123 09:30:50.442369 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tn627" podStartSLOduration=4.170490253 podStartE2EDuration="1m2.442343456s" podCreationTimestamp="2026-01-23 09:29:48 +0000 UTC" firstStartedPulling="2026-01-23 09:29:49.529709276 +0000 UTC m=+1362.153087817" lastFinishedPulling="2026-01-23 09:30:47.801562469 +0000 UTC m=+1420.424941020" observedRunningTime="2026-01-23 09:30:50.42764339 +0000 UTC m=+1423.051021931" watchObservedRunningTime="2026-01-23 09:30:50.442343456 +0000 UTC m=+1423.065721997" Jan 23 09:31:13 crc kubenswrapper[4684]: I0123 09:31:13.729184 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:31:13 crc kubenswrapper[4684]: I0123 09:31:13.729720 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:31:16 crc kubenswrapper[4684]: I0123 09:31:16.635272 4684 generic.go:334] "Generic (PLEG): container finished" podID="7b4ce139-6147-4c82-8b4d-74de8f779b6c" containerID="bd570b482adca7f99ca2f281ec8679e767854f52d98272f50742820a27744f07" exitCode=0 Jan 23 09:31:16 crc kubenswrapper[4684]: I0123 
Jan 23 09:31:16 crc kubenswrapper[4684]: I0123 09:31:16.635360 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-znt8j" event={"ID":"7b4ce139-6147-4c82-8b4d-74de8f779b6c","Type":"ContainerDied","Data":"bd570b482adca7f99ca2f281ec8679e767854f52d98272f50742820a27744f07"}
Jan 23 09:31:17 crc kubenswrapper[4684]: I0123 09:31:17.978142 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-znt8j"
Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.098561 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccj8l\" (UniqueName: \"kubernetes.io/projected/7b4ce139-6147-4c82-8b4d-74de8f779b6c-kube-api-access-ccj8l\") pod \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") "
Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.098674 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-combined-ca-bundle\") pod \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") "
Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.098921 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-config-data\") pod \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\" (UID: \"7b4ce139-6147-4c82-8b4d-74de8f779b6c\") "
Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.106364 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4ce139-6147-4c82-8b4d-74de8f779b6c-kube-api-access-ccj8l" (OuterVolumeSpecName: "kube-api-access-ccj8l") pod "7b4ce139-6147-4c82-8b4d-74de8f779b6c" (UID: "7b4ce139-6147-4c82-8b4d-74de8f779b6c"). InnerVolumeSpecName "kube-api-access-ccj8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.130144 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b4ce139-6147-4c82-8b4d-74de8f779b6c" (UID: "7b4ce139-6147-4c82-8b4d-74de8f779b6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.201108 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.201162 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccj8l\" (UniqueName: \"kubernetes.io/projected/7b4ce139-6147-4c82-8b4d-74de8f779b6c-kube-api-access-ccj8l\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.201178 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4ce139-6147-4c82-8b4d-74de8f779b6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.656280 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-znt8j" event={"ID":"7b4ce139-6147-4c82-8b4d-74de8f779b6c","Type":"ContainerDied","Data":"bd85acbbd84e664608f9bb96652801cf94a46e48f3bb8be05aef7eba8f93cc78"} Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.656329 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd85acbbd84e664608f9bb96652801cf94a46e48f3bb8be05aef7eba8f93cc78" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.656585 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-znt8j" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.961844 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5754f9d6ff-t88q4"] Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.962464 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d712541-e87b-49c3-8cde-2daf0ef2c0bd" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.967930 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d712541-e87b-49c3-8cde-2daf0ef2c0bd" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.968043 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4ce139-6147-4c82-8b4d-74de8f779b6c" containerName="keystone-db-sync" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.968110 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4ce139-6147-4c82-8b4d-74de8f779b6c" containerName="keystone-db-sync" Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.968184 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2e86a9-9a16-4fd0-b065-d95744b90dd7" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.968246 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2e86a9-9a16-4fd0-b065-d95744b90dd7" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.968308 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c57d15-8e5f-4245-8830-c84079c9bee5" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.968361 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c57d15-8e5f-4245-8830-c84079c9bee5" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.968440 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="248994f7-7b0f-41e4-8a32-2dbf42ea41e9" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.968730 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="248994f7-7b0f-41e4-8a32-2dbf42ea41e9" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.968768 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51a95467-7819-43c2-aa22-699c74df62e8" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.968775 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="51a95467-7819-43c2-aa22-699c74df62e8" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: E0123 09:31:18.968788 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.968798 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969096 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="51a95467-7819-43c2-aa22-699c74df62e8" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969115 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d712541-e87b-49c3-8cde-2daf0ef2c0bd" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969123 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4ce139-6147-4c82-8b4d-74de8f779b6c" containerName="keystone-db-sync" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969132 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2e86a9-9a16-4fd0-b065-d95744b90dd7" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969143 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c57d15-8e5f-4245-8830-c84079c9bee5" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969152 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" containerName="mariadb-database-create" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.969169 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="248994f7-7b0f-41e4-8a32-2dbf42ea41e9" containerName="mariadb-account-create-update" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.970021 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:18 crc kubenswrapper[4684]: I0123 09:31:18.981335 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5754f9d6ff-t88q4"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.058461 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-99mf4"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.067259 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.070996 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.071007 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.071172 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.071291 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.071985 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8c4md" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.083012 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-99mf4"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.118639 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.118719 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnw5\" (UniqueName: \"kubernetes.io/projected/6c23e474-5577-4b21-8753-10c8e7add0d5-kube-api-access-cgnw5\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.118759 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-dns-svc\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.118886 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.118957 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-config\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.220962 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-config-data\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221020 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-scripts\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221056 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbjt\" (UniqueName: \"kubernetes.io/projected/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-kube-api-access-tvbjt\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221113 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-combined-ca-bundle\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221165 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-fernet-keys\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221213 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221279 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-config\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221351 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgnw5\" (UniqueName: \"kubernetes.io/projected/6c23e474-5577-4b21-8753-10c8e7add0d5-kube-api-access-cgnw5\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221421 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-credential-keys\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.221441 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-dns-svc\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.222398 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-config\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.222495 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-dns-svc\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.223109 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.223256 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.254649 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgnw5\" (UniqueName: \"kubernetes.io/projected/6c23e474-5577-4b21-8753-10c8e7add0d5-kube-api-access-cgnw5\") pod \"dnsmasq-dns-5754f9d6ff-t88q4\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.282766 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.285189 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.289292 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.291388 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.292926 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.311621 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.325767 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-config-data\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.328428 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-scripts\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.328456 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-combined-ca-bundle\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.328472 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbjt\" (UniqueName: \"kubernetes.io/projected/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-kube-api-access-tvbjt\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.328526 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-fernet-keys\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.328722 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-credential-keys\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.339641 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-fernet-keys\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.341190 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-config-data\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.348487 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-combined-ca-bundle\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " 
pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.376881 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-scripts\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.384199 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-credential-keys\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.401428 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbjt\" (UniqueName: \"kubernetes.io/projected/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-kube-api-access-tvbjt\") pod \"keystone-bootstrap-99mf4\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437566 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rprcw\" (UniqueName: \"kubernetes.io/projected/72dbfed3-111a-4a4f-999a-ef7ade8b5116-kube-api-access-rprcw\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437637 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-scripts\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437677 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437786 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-log-httpd\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437812 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-config-data\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437836 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.437875 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-run-httpd\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.530106 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gpzdh"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.531182 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.533670 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.534094 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-82m59" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.534169 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.539563 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rprcw\" (UniqueName: \"kubernetes.io/projected/72dbfed3-111a-4a4f-999a-ef7ade8b5116-kube-api-access-rprcw\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.540355 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-scripts\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.540427 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.540550 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-log-httpd\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.540582 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-config-data\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.540605 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.540653 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-run-httpd\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: 
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.541564 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-log-httpd\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.546110 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-run-httpd\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.546221 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-scripts\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.547308 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.549645 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.554126 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-config-data\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.648804 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-config-data\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.648862 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-scripts\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh"
Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.648886 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-db-sync-config-data\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh"
pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.649003 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t265x\" (UniqueName: \"kubernetes.io/projected/82fd9420-b726-4b9d-ad21-b05181fb6e23-kube-api-access-t265x\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.649037 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-combined-ca-bundle\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.650342 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rprcw\" (UniqueName: \"kubernetes.io/projected/72dbfed3-111a-4a4f-999a-ef7ade8b5116-kube-api-access-rprcw\") pod \"ceilometer-0\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") " pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.674239 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gpzdh"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.686653 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.737806 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mjwvr"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.738873 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.744512 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.744745 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.744887 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5g4fk" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.804616 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t265x\" (UniqueName: \"kubernetes.io/projected/82fd9420-b726-4b9d-ad21-b05181fb6e23-kube-api-access-t265x\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.804709 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-combined-ca-bundle\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.804956 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-config-data\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.805030 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-scripts\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.805065 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-db-sync-config-data\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.805106 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82fd9420-b726-4b9d-ad21-b05181fb6e23-etc-machine-id\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.805309 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82fd9420-b726-4b9d-ad21-b05181fb6e23-etc-machine-id\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.845301 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-combined-ca-bundle\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 
crc kubenswrapper[4684]: I0123 09:31:19.853138 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-db-sync-config-data\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.854087 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-scripts\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.874057 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mjwvr"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.891405 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-config-data\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.908133 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t265x\" (UniqueName: \"kubernetes.io/projected/82fd9420-b726-4b9d-ad21-b05181fb6e23-kube-api-access-t265x\") pod \"cinder-db-sync-gpzdh\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.947971 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-combined-ca-bundle\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.948040 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-config\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.948278 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8hzp\" (UniqueName: \"kubernetes.io/projected/5fd7bf23-46a9-4032-97f0-8d7984b734e0-kube-api-access-f8hzp\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.948608 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.964029 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9pq2q"] Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.965293 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.969400 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrw89" Jan 23 09:31:19 crc kubenswrapper[4684]: I0123 09:31:19.969601 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.022767 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9pq2q"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.045608 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5754f9d6ff-t88q4"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.064566 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-combined-ca-bundle\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.064769 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-db-sync-config-data\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.064820 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8hzp\" (UniqueName: \"kubernetes.io/projected/5fd7bf23-46a9-4032-97f0-8d7984b734e0-kube-api-access-f8hzp\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.065054 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-combined-ca-bundle\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.065093 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-config\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.065120 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsg6d\" (UniqueName: \"kubernetes.io/projected/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-kube-api-access-bsg6d\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.077837 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-config\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.078139 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-combined-ca-bundle\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.080248 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-k24rv"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.083370 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.088522 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8hzp\" (UniqueName: \"kubernetes.io/projected/5fd7bf23-46a9-4032-97f0-8d7984b734e0-kube-api-access-f8hzp\") pod \"neutron-db-sync-mjwvr\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.090293 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-79zvt" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.090892 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.091830 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.102568 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k24rv"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.122089 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.128920 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-786f46ff4c-86fsj"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.130814 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.161413 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-786f46ff4c-86fsj"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.169738 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsg6d\" (UniqueName: \"kubernetes.io/projected/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-kube-api-access-bsg6d\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.169802 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-combined-ca-bundle\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.169843 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwtp\" (UniqueName: \"kubernetes.io/projected/c51a6dae-114a-4a53-8e31-71f0f0124510-kube-api-access-9kwtp\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.169865 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-config-data\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.169888 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-scripts\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.169905 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51a6dae-114a-4a53-8e31-71f0f0124510-logs\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.175162 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-db-sync-config-data\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.175673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-combined-ca-bundle\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.179243 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-db-sync-config-data\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.180084 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-combined-ca-bundle\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.187092 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsg6d\" (UniqueName: \"kubernetes.io/projected/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-kube-api-access-bsg6d\") pod \"barbican-db-sync-9pq2q\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.194491 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282524 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-config-data\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282589 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-dns-svc\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282620 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-scripts\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282642 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-nb\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282676 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51a6dae-114a-4a53-8e31-71f0f0124510-logs\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282868 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-config\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282895 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-sb\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-combined-ca-bundle\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.282994 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qjh6\" (UniqueName: \"kubernetes.io/projected/6dcca2c8-e79c-4130-8376-90b178f9d2da-kube-api-access-6qjh6\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.283074 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kwtp\" (UniqueName: \"kubernetes.io/projected/c51a6dae-114a-4a53-8e31-71f0f0124510-kube-api-access-9kwtp\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.290589 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-config-data\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.290610 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51a6dae-114a-4a53-8e31-71f0f0124510-logs\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.295205 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.300212 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-combined-ca-bundle\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.301185 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-scripts\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.312294 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kwtp\" (UniqueName: \"kubernetes.io/projected/c51a6dae-114a-4a53-8e31-71f0f0124510-kube-api-access-9kwtp\") pod \"placement-db-sync-k24rv\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.384894 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qjh6\" (UniqueName: \"kubernetes.io/projected/6dcca2c8-e79c-4130-8376-90b178f9d2da-kube-api-access-6qjh6\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.385035 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-dns-svc\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.385067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-nb\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.385115 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-config\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.385167 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-sb\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.387002 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-dns-svc\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.387645 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-config\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.389263 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-sb\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.394131 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-nb\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.414290 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k24rv" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.416577 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5754f9d6ff-t88q4"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.445548 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qjh6\" (UniqueName: \"kubernetes.io/projected/6dcca2c8-e79c-4130-8376-90b178f9d2da-kube-api-access-6qjh6\") pod \"dnsmasq-dns-786f46ff4c-86fsj\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.478211 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.586437 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-99mf4"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.747859 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.755389 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" event={"ID":"6c23e474-5577-4b21-8753-10c8e7add0d5","Type":"ContainerStarted","Data":"1c180628caea3b40de7a6754d10adc6687c1dbd1159c0bb1330f40b1bb114db0"} Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.762749 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-99mf4" event={"ID":"22e63821-ab31-4f87-8f9a-2e1e684ecd8a","Type":"ContainerStarted","Data":"b6a825663481224ccb00c46b339090b36fd6d4a74ce9b0c026163061be019006"} Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.786804 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.990083 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mjwvr"] Jan 23 09:31:20 crc kubenswrapper[4684]: W0123 09:31:20.995473 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fd7bf23_46a9_4032_97f0_8d7984b734e0.slice/crio-7a59865270964d2c35defdb095765eedbe56e33f2476bd64530962ba33ecbd0d WatchSource:0}: Error finding container 7a59865270964d2c35defdb095765eedbe56e33f2476bd64530962ba33ecbd0d: Status 404 returned error can't find the container with id 7a59865270964d2c35defdb095765eedbe56e33f2476bd64530962ba33ecbd0d Jan 23 09:31:20 crc kubenswrapper[4684]: I0123 09:31:20.997838 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gpzdh"] Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.144839 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9pq2q"] Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.156766 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k24rv"] Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.258078 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-786f46ff4c-86fsj"] Jan 23 09:31:21 crc kubenswrapper[4684]: W0123 09:31:21.264839 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dcca2c8_e79c_4130_8376_90b178f9d2da.slice/crio-ef362851d6c862f479fb91ff1545b3281650de59d986a5ab55a968459f4223dc WatchSource:0}: Error finding container ef362851d6c862f479fb91ff1545b3281650de59d986a5ab55a968459f4223dc: Status 404 returned error can't find the container with id ef362851d6c862f479fb91ff1545b3281650de59d986a5ab55a968459f4223dc Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.771199 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerStarted","Data":"8fdbb10356dc1a236b23418590cdc764c7c56983b85c70fa5dd55aac9f879759"} Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.772375 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mjwvr" 
event={"ID":"5fd7bf23-46a9-4032-97f0-8d7984b734e0","Type":"ContainerStarted","Data":"7a59865270964d2c35defdb095765eedbe56e33f2476bd64530962ba33ecbd0d"} Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.773689 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9pq2q" event={"ID":"4ffd82b5-ced8-4cca-89cb-25ad1bba207a","Type":"ContainerStarted","Data":"ab574c7418715728a94e360e435de81a0713f8695e887305f811460ce99d750b"} Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.774558 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpzdh" event={"ID":"82fd9420-b726-4b9d-ad21-b05181fb6e23","Type":"ContainerStarted","Data":"1d7cca76d17e57a0767b127140623be4bedc0bb7d62c5eb78fad9c048f019e40"} Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.775538 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k24rv" event={"ID":"c51a6dae-114a-4a53-8e31-71f0f0124510","Type":"ContainerStarted","Data":"0c4d81adc0857d5c6038199b55de2adaeb4fd92e2372deba9ee37b5b4ba35018"} Jan 23 09:31:21 crc kubenswrapper[4684]: I0123 09:31:21.776447 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" event={"ID":"6dcca2c8-e79c-4130-8376-90b178f9d2da","Type":"ContainerStarted","Data":"ef362851d6c862f479fb91ff1545b3281650de59d986a5ab55a968459f4223dc"} Jan 23 09:31:22 crc kubenswrapper[4684]: I0123 09:31:22.308000 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:31:22 crc kubenswrapper[4684]: I0123 09:31:22.788820 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-99mf4" event={"ID":"22e63821-ab31-4f87-8f9a-2e1e684ecd8a","Type":"ContainerStarted","Data":"33604b9c52c32c433debccb925270fa5ab782a873510def0353d22c565141900"} Jan 23 09:31:22 crc kubenswrapper[4684]: I0123 09:31:22.790873 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" event={"ID":"6c23e474-5577-4b21-8753-10c8e7add0d5","Type":"ContainerStarted","Data":"028ae6268f2388a78005da2ec95aa02b688f0a5f69b57c70d13a2d03c723664f"} Jan 23 09:31:23 crc kubenswrapper[4684]: I0123 09:31:23.812801 4684 generic.go:334] "Generic (PLEG): container finished" podID="6c23e474-5577-4b21-8753-10c8e7add0d5" containerID="028ae6268f2388a78005da2ec95aa02b688f0a5f69b57c70d13a2d03c723664f" exitCode=0 Jan 23 09:31:23 crc kubenswrapper[4684]: I0123 09:31:23.812980 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" event={"ID":"6c23e474-5577-4b21-8753-10c8e7add0d5","Type":"ContainerDied","Data":"028ae6268f2388a78005da2ec95aa02b688f0a5f69b57c70d13a2d03c723664f"} Jan 23 09:31:23 crc kubenswrapper[4684]: I0123 09:31:23.814594 4684 generic.go:334] "Generic (PLEG): container finished" podID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerID="a74d741d5c9cb8f3ffa6fd3ebf7b6facec345363f70040cddadcb7ac9467ef0f" exitCode=0 Jan 23 09:31:23 crc kubenswrapper[4684]: I0123 09:31:23.814679 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" event={"ID":"6dcca2c8-e79c-4130-8376-90b178f9d2da","Type":"ContainerDied","Data":"a74d741d5c9cb8f3ffa6fd3ebf7b6facec345363f70040cddadcb7ac9467ef0f"} Jan 23 09:31:23 crc kubenswrapper[4684]: I0123 09:31:23.886546 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-99mf4" podStartSLOduration=4.8865248789999995 podStartE2EDuration="4.886524879s" 
podCreationTimestamp="2026-01-23 09:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:23.877801797 +0000 UTC m=+1456.501180358" watchObservedRunningTime="2026-01-23 09:31:23.886524879 +0000 UTC m=+1456.509903420" Jan 23 09:31:24 crc kubenswrapper[4684]: I0123 09:31:24.834932 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" event={"ID":"6dcca2c8-e79c-4130-8376-90b178f9d2da","Type":"ContainerStarted","Data":"a354ee24f6480d96a49b18023953298ad981479871e4a39cb0a671cead3ab410"} Jan 23 09:31:24 crc kubenswrapper[4684]: I0123 09:31:24.843411 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mjwvr" event={"ID":"5fd7bf23-46a9-4032-97f0-8d7984b734e0","Type":"ContainerStarted","Data":"1f92611b2ba669fe16cef70364ca7ce8e9c1cbf3585f43341dfcf83194801d6f"} Jan 23 09:31:24 crc kubenswrapper[4684]: I0123 09:31:24.871803 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" podStartSLOduration=5.871783776 podStartE2EDuration="5.871783776s" podCreationTimestamp="2026-01-23 09:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:24.868535032 +0000 UTC m=+1457.491913583" watchObservedRunningTime="2026-01-23 09:31:24.871783776 +0000 UTC m=+1457.495162317" Jan 23 09:31:24 crc kubenswrapper[4684]: I0123 09:31:24.928474 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mjwvr" podStartSLOduration=5.928454215 podStartE2EDuration="5.928454215s" podCreationTimestamp="2026-01-23 09:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:24.91722359 +0000 UTC m=+1457.540602141" watchObservedRunningTime="2026-01-23 09:31:24.928454215 +0000 UTC m=+1457.551832756" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.374375 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.479749 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.502627 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-config\") pod \"6c23e474-5577-4b21-8753-10c8e7add0d5\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.502807 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-dns-svc\") pod \"6c23e474-5577-4b21-8753-10c8e7add0d5\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.502879 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-sb\") pod \"6c23e474-5577-4b21-8753-10c8e7add0d5\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.502952 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgnw5\" (UniqueName: \"kubernetes.io/projected/6c23e474-5577-4b21-8753-10c8e7add0d5-kube-api-access-cgnw5\") pod \"6c23e474-5577-4b21-8753-10c8e7add0d5\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.503004 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-nb\") pod \"6c23e474-5577-4b21-8753-10c8e7add0d5\" (UID: \"6c23e474-5577-4b21-8753-10c8e7add0d5\") " Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.550353 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c23e474-5577-4b21-8753-10c8e7add0d5-kube-api-access-cgnw5" (OuterVolumeSpecName: "kube-api-access-cgnw5") pod "6c23e474-5577-4b21-8753-10c8e7add0d5" (UID: "6c23e474-5577-4b21-8753-10c8e7add0d5"). InnerVolumeSpecName "kube-api-access-cgnw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.554992 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c23e474-5577-4b21-8753-10c8e7add0d5" (UID: "6c23e474-5577-4b21-8753-10c8e7add0d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.556287 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-config" (OuterVolumeSpecName: "config") pod "6c23e474-5577-4b21-8753-10c8e7add0d5" (UID: "6c23e474-5577-4b21-8753-10c8e7add0d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.570075 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c23e474-5577-4b21-8753-10c8e7add0d5" (UID: "6c23e474-5577-4b21-8753-10c8e7add0d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.605101 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.605140 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.605148 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.605160 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgnw5\" (UniqueName: \"kubernetes.io/projected/6c23e474-5577-4b21-8753-10c8e7add0d5-kube-api-access-cgnw5\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.638531 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c23e474-5577-4b21-8753-10c8e7add0d5" (UID: "6c23e474-5577-4b21-8753-10c8e7add0d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.707569 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c23e474-5577-4b21-8753-10c8e7add0d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.862244 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.863082 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5754f9d6ff-t88q4" event={"ID":"6c23e474-5577-4b21-8753-10c8e7add0d5","Type":"ContainerDied","Data":"1c180628caea3b40de7a6754d10adc6687c1dbd1159c0bb1330f40b1bb114db0"} Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.863619 4684 scope.go:117] "RemoveContainer" containerID="028ae6268f2388a78005da2ec95aa02b688f0a5f69b57c70d13a2d03c723664f" Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.962414 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5754f9d6ff-t88q4"] Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:25.997645 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5754f9d6ff-t88q4"] Jan 23 09:31:27 crc kubenswrapper[4684]: I0123 09:31:27.603140 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c23e474-5577-4b21-8753-10c8e7add0d5" path="/var/lib/kubelet/pods/6c23e474-5577-4b21-8753-10c8e7add0d5/volumes" Jan 23 09:31:29 crc kubenswrapper[4684]: I0123 09:31:29.920668 4684 generic.go:334] "Generic (PLEG): container finished" podID="22e63821-ab31-4f87-8f9a-2e1e684ecd8a" containerID="33604b9c52c32c433debccb925270fa5ab782a873510def0353d22c565141900" exitCode=0 Jan 23 09:31:29 crc kubenswrapper[4684]: I0123 09:31:29.920729 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-99mf4" event={"ID":"22e63821-ab31-4f87-8f9a-2e1e684ecd8a","Type":"ContainerDied","Data":"33604b9c52c32c433debccb925270fa5ab782a873510def0353d22c565141900"} Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.479878 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.559124 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-fpmhg"] Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.559487 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" containerID="cri-o://69e1711db4062aeb41a776da95d7ff1a45b5e0d638add0143d72deb888d171e3" gracePeriod=10 Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.932284 4684 generic.go:334] "Generic (PLEG): container finished" podID="0ed6b304-077d-4d13-a28b-2c41c046a303" containerID="3f222092aa592d93f5764d65b5d2400acc8ba125cb721c954901c7b1ff1c30ad" exitCode=0 Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.932450 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tn627" event={"ID":"0ed6b304-077d-4d13-a28b-2c41c046a303","Type":"ContainerDied","Data":"3f222092aa592d93f5764d65b5d2400acc8ba125cb721c954901c7b1ff1c30ad"} Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.937230 4684 generic.go:334] "Generic (PLEG): container finished" podID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerID="69e1711db4062aeb41a776da95d7ff1a45b5e0d638add0143d72deb888d171e3" exitCode=0 Jan 23 09:31:30 crc kubenswrapper[4684]: I0123 09:31:30.937326 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" event={"ID":"d2457a57-4283-4e26-982f-62acaa95c1bf","Type":"ContainerDied","Data":"69e1711db4062aeb41a776da95d7ff1a45b5e0d638add0143d72deb888d171e3"} Jan 23 09:31:33 
crc kubenswrapper[4684]: I0123 09:31:33.718651 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tn627" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.750343 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818613 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-db-sync-config-data\") pod \"0ed6b304-077d-4d13-a28b-2c41c046a303\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818684 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-config-data\") pod \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818765 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-fernet-keys\") pod \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818808 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-scripts\") pod \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818830 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-credential-keys\") pod \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818857 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-combined-ca-bundle\") pod \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818941 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbjt\" (UniqueName: \"kubernetes.io/projected/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-kube-api-access-tvbjt\") pod \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\" (UID: \"22e63821-ab31-4f87-8f9a-2e1e684ecd8a\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818962 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-combined-ca-bundle\") pod \"0ed6b304-077d-4d13-a28b-2c41c046a303\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.818999 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvkp8\" (UniqueName: \"kubernetes.io/projected/0ed6b304-077d-4d13-a28b-2c41c046a303-kube-api-access-mvkp8\") pod \"0ed6b304-077d-4d13-a28b-2c41c046a303\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " Jan 23 
09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.819022 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-config-data\") pod \"0ed6b304-077d-4d13-a28b-2c41c046a303\" (UID: \"0ed6b304-077d-4d13-a28b-2c41c046a303\") " Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.839483 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0ed6b304-077d-4d13-a28b-2c41c046a303" (UID: "0ed6b304-077d-4d13-a28b-2c41c046a303"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.840451 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "22e63821-ab31-4f87-8f9a-2e1e684ecd8a" (UID: "22e63821-ab31-4f87-8f9a-2e1e684ecd8a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.838682 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-kube-api-access-tvbjt" (OuterVolumeSpecName: "kube-api-access-tvbjt") pod "22e63821-ab31-4f87-8f9a-2e1e684ecd8a" (UID: "22e63821-ab31-4f87-8f9a-2e1e684ecd8a"). InnerVolumeSpecName "kube-api-access-tvbjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.841001 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "22e63821-ab31-4f87-8f9a-2e1e684ecd8a" (UID: "22e63821-ab31-4f87-8f9a-2e1e684ecd8a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.842234 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-scripts" (OuterVolumeSpecName: "scripts") pod "22e63821-ab31-4f87-8f9a-2e1e684ecd8a" (UID: "22e63821-ab31-4f87-8f9a-2e1e684ecd8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.862333 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed6b304-077d-4d13-a28b-2c41c046a303-kube-api-access-mvkp8" (OuterVolumeSpecName: "kube-api-access-mvkp8") pod "0ed6b304-077d-4d13-a28b-2c41c046a303" (UID: "0ed6b304-077d-4d13-a28b-2c41c046a303"). InnerVolumeSpecName "kube-api-access-mvkp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.883827 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22e63821-ab31-4f87-8f9a-2e1e684ecd8a" (UID: "22e63821-ab31-4f87-8f9a-2e1e684ecd8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.885412 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ed6b304-077d-4d13-a28b-2c41c046a303" (UID: "0ed6b304-077d-4d13-a28b-2c41c046a303"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.888303 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-config-data" (OuterVolumeSpecName: "config-data") pod "0ed6b304-077d-4d13-a28b-2c41c046a303" (UID: "0ed6b304-077d-4d13-a28b-2c41c046a303"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.894892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-config-data" (OuterVolumeSpecName: "config-data") pod "22e63821-ab31-4f87-8f9a-2e1e684ecd8a" (UID: "22e63821-ab31-4f87-8f9a-2e1e684ecd8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.923864 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.923892 4684 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.924087 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.924100 4684 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.924114 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.925106 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbjt\" (UniqueName: \"kubernetes.io/projected/22e63821-ab31-4f87-8f9a-2e1e684ecd8a-kube-api-access-tvbjt\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.925128 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.925136 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvkp8\" (UniqueName: \"kubernetes.io/projected/0ed6b304-077d-4d13-a28b-2c41c046a303-kube-api-access-mvkp8\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 
09:31:33.925145 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.925153 4684 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ed6b304-077d-4d13-a28b-2c41c046a303-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.970132 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-99mf4" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.970196 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-99mf4" event={"ID":"22e63821-ab31-4f87-8f9a-2e1e684ecd8a","Type":"ContainerDied","Data":"b6a825663481224ccb00c46b339090b36fd6d4a74ce9b0c026163061be019006"} Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.970238 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a825663481224ccb00c46b339090b36fd6d4a74ce9b0c026163061be019006" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.976541 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tn627" event={"ID":"0ed6b304-077d-4d13-a28b-2c41c046a303","Type":"ContainerDied","Data":"b2081406d5132fa497ff9c9a357194ecfe42ff1d24a87a77cf56852e351319c4"} Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.976572 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2081406d5132fa497ff9c9a357194ecfe42ff1d24a87a77cf56852e351319c4" Jan 23 09:31:33 crc kubenswrapper[4684]: I0123 09:31:33.976640 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tn627" Jan 23 09:31:34 crc kubenswrapper[4684]: I0123 09:31:34.933801 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-99mf4"] Jan 23 09:31:34 crc kubenswrapper[4684]: I0123 09:31:34.948101 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-99mf4"] Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.014232 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zxlq8"] Jan 23 09:31:35 crc kubenswrapper[4684]: E0123 09:31:35.014741 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e63821-ab31-4f87-8f9a-2e1e684ecd8a" containerName="keystone-bootstrap" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.014764 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e63821-ab31-4f87-8f9a-2e1e684ecd8a" containerName="keystone-bootstrap" Jan 23 09:31:35 crc kubenswrapper[4684]: E0123 09:31:35.014809 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c23e474-5577-4b21-8753-10c8e7add0d5" containerName="init" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.014819 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c23e474-5577-4b21-8753-10c8e7add0d5" containerName="init" Jan 23 09:31:35 crc kubenswrapper[4684]: E0123 09:31:35.014837 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ed6b304-077d-4d13-a28b-2c41c046a303" containerName="glance-db-sync" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.014845 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed6b304-077d-4d13-a28b-2c41c046a303" containerName="glance-db-sync" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.015066 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed6b304-077d-4d13-a28b-2c41c046a303" containerName="glance-db-sync" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.015128 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c23e474-5577-4b21-8753-10c8e7add0d5" containerName="init" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.015151 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e63821-ab31-4f87-8f9a-2e1e684ecd8a" containerName="keystone-bootstrap" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.016126 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.019544 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.019555 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.019809 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8c4md" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.019874 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.020970 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.050303 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zxlq8"] Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.146467 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-fernet-keys\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.146556 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-credential-keys\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.146604 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-config-data\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.146657 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-scripts\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.146679 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-combined-ca-bundle\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.146858 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rqk\" (UniqueName: \"kubernetes.io/projected/708d53e6-341e-4e7b-80e8-482b0175948c-kube-api-access-b2rqk\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.248175 4684 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-credential-keys\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.248252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-config-data\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.248306 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-scripts\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.248332 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-combined-ca-bundle\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.248394 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2rqk\" (UniqueName: \"kubernetes.io/projected/708d53e6-341e-4e7b-80e8-482b0175948c-kube-api-access-b2rqk\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.248438 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-fernet-keys\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.351772 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-fernet-keys\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.352306 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-combined-ca-bundle\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.352343 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-credential-keys\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.353128 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-config-data\") pod \"keystone-bootstrap-zxlq8\" (UID: 
\"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.362658 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-scripts\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.363168 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2rqk\" (UniqueName: \"kubernetes.io/projected/708d53e6-341e-4e7b-80e8-482b0175948c-kube-api-access-b2rqk\") pod \"keystone-bootstrap-zxlq8\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.592605 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e63821-ab31-4f87-8f9a-2e1e684ecd8a" path="/var/lib/kubelet/pods/22e63821-ab31-4f87-8f9a-2e1e684ecd8a/volumes" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.636093 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.937496 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-694dbb6647-xtjr2"] Jan 23 09:31:35 crc kubenswrapper[4684]: I0123 09:31:35.938896 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.061298 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694dbb6647-xtjr2"] Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.065332 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-nb\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.065404 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tmt\" (UniqueName: \"kubernetes.io/projected/3e252874-6205-4570-a8a8-dada614f685e-kube-api-access-22tmt\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.065444 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-dns-svc\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.065467 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-sb\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.065569 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-config\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.166917 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tmt\" (UniqueName: \"kubernetes.io/projected/3e252874-6205-4570-a8a8-dada614f685e-kube-api-access-22tmt\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.166971 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-dns-svc\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.166997 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-sb\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.167078 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-config\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.167144 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-nb\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.168106 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-nb\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.168147 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-dns-svc\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.168367 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-sb\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.168659 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-config\") pod 
\"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.188416 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tmt\" (UniqueName: \"kubernetes.io/projected/3e252874-6205-4570-a8a8-dada614f685e-kube-api-access-22tmt\") pod \"dnsmasq-dns-694dbb6647-xtjr2\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:36 crc kubenswrapper[4684]: I0123 09:31:36.267026 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:31:40 crc kubenswrapper[4684]: I0123 09:31:40.290443 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: i/o timeout" Jan 23 09:31:43 crc kubenswrapper[4684]: I0123 09:31:43.728926 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:31:43 crc kubenswrapper[4684]: I0123 09:31:43.730238 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:31:45 crc kubenswrapper[4684]: I0123 09:31:45.291552 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: i/o timeout" Jan 23 09:31:50 crc kubenswrapper[4684]: I0123 09:31:50.291823 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: i/o timeout" Jan 23 09:31:50 crc kubenswrapper[4684]: I0123 09:31:50.292630 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:31:55 crc kubenswrapper[4684]: I0123 09:31:55.292486 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: i/o timeout" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.014515 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.087609 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-sb\") pod \"d2457a57-4283-4e26-982f-62acaa95c1bf\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.087726 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-dns-svc\") pod \"d2457a57-4283-4e26-982f-62acaa95c1bf\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.087786 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-nb\") pod \"d2457a57-4283-4e26-982f-62acaa95c1bf\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.087858 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-config\") pod \"d2457a57-4283-4e26-982f-62acaa95c1bf\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.087910 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjb9w\" (UniqueName: \"kubernetes.io/projected/d2457a57-4283-4e26-982f-62acaa95c1bf-kube-api-access-cjb9w\") pod \"d2457a57-4283-4e26-982f-62acaa95c1bf\" (UID: \"d2457a57-4283-4e26-982f-62acaa95c1bf\") " Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.111723 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2457a57-4283-4e26-982f-62acaa95c1bf-kube-api-access-cjb9w" (OuterVolumeSpecName: "kube-api-access-cjb9w") pod "d2457a57-4283-4e26-982f-62acaa95c1bf" (UID: "d2457a57-4283-4e26-982f-62acaa95c1bf"). InnerVolumeSpecName "kube-api-access-cjb9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.136225 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2457a57-4283-4e26-982f-62acaa95c1bf" (UID: "d2457a57-4283-4e26-982f-62acaa95c1bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.149682 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2457a57-4283-4e26-982f-62acaa95c1bf" (UID: "d2457a57-4283-4e26-982f-62acaa95c1bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.157262 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2457a57-4283-4e26-982f-62acaa95c1bf" (UID: "d2457a57-4283-4e26-982f-62acaa95c1bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.157996 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-config" (OuterVolumeSpecName: "config") pod "d2457a57-4283-4e26-982f-62acaa95c1bf" (UID: "d2457a57-4283-4e26-982f-62acaa95c1bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.167340 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" event={"ID":"d2457a57-4283-4e26-982f-62acaa95c1bf","Type":"ContainerDied","Data":"1505b40f9668c0b416f28f436b18e8ed45b30110e6b86ae7fdd80b72a2fed61e"} Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.167388 4684 scope.go:117] "RemoveContainer" containerID="69e1711db4062aeb41a776da95d7ff1a45b5e0d638add0143d72deb888d171e3" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.167421 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.189968 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.190006 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.190020 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjb9w\" (UniqueName: \"kubernetes.io/projected/d2457a57-4283-4e26-982f-62acaa95c1bf-kube-api-access-cjb9w\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.190032 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.190043 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2457a57-4283-4e26-982f-62acaa95c1bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.251325 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-fpmhg"] Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.258078 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-fpmhg"] Jan 23 09:32:00 crc kubenswrapper[4684]: I0123 09:32:00.293517 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-fpmhg" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: i/o timeout" Jan 23 09:32:01 crc kubenswrapper[4684]: E0123 09:32:01.350775 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 23 09:32:01 crc kubenswrapper[4684]: E0123 09:32:01.351222 4684 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t265x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-gpzdh_openstack(82fd9420-b726-4b9d-ad21-b05181fb6e23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:32:01 crc kubenswrapper[4684]: E0123 09:32:01.352428 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-gpzdh" podUID="82fd9420-b726-4b9d-ad21-b05181fb6e23" Jan 23 09:32:01 crc kubenswrapper[4684]: I0123 09:32:01.597151 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" path="/var/lib/kubelet/pods/d2457a57-4283-4e26-982f-62acaa95c1bf/volumes" Jan 23 09:32:02 crc kubenswrapper[4684]: E0123 09:32:02.189892 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-gpzdh" podUID="82fd9420-b726-4b9d-ad21-b05181fb6e23" Jan 23 09:32:02 crc kubenswrapper[4684]: I0123 09:32:02.217730 4684 scope.go:117] "RemoveContainer" containerID="d80b133b7772d25ddf139545d4d612b6713e72bb1eb181b99b46eb33760a4e4b" Jan 23 09:32:02 crc kubenswrapper[4684]: E0123 09:32:02.395873 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 23 09:32:02 crc kubenswrapper[4684]: E0123 09:32:02.396314 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bsg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-9pq2q_openstack(4ffd82b5-ced8-4cca-89cb-25ad1bba207a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:32:02 crc kubenswrapper[4684]: E0123 09:32:02.397636 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-9pq2q" podUID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" Jan 23 09:32:02 crc kubenswrapper[4684]: I0123 09:32:02.685205 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694dbb6647-xtjr2"] Jan 23 09:32:02 crc kubenswrapper[4684]: I0123 09:32:02.755073 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zxlq8"] Jan 23 09:32:02 crc kubenswrapper[4684]: W0123 09:32:02.760574 4684 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod708d53e6_341e_4e7b_80e8_482b0175948c.slice/crio-7b667a8ad74a4b8b98d47cda0744d01c6ff5830e41404b3c170b4faf1bd49e37 WatchSource:0}: Error finding container 7b667a8ad74a4b8b98d47cda0744d01c6ff5830e41404b3c170b4faf1bd49e37: Status 404 returned error can't find the container with id 7b667a8ad74a4b8b98d47cda0744d01c6ff5830e41404b3c170b4faf1bd49e37 Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.225486 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerStarted","Data":"5ba451b025535320b1519f5d1b64c8275e5ef33e65b5881a1c0ce7548cf3b62b"} Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.227115 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxlq8" event={"ID":"708d53e6-341e-4e7b-80e8-482b0175948c","Type":"ContainerStarted","Data":"31063009d43a382c032f53f4355e2d098ac4e31c4b5cbaef8ff1fc7f8b44ca70"} Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.227187 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxlq8" event={"ID":"708d53e6-341e-4e7b-80e8-482b0175948c","Type":"ContainerStarted","Data":"7b667a8ad74a4b8b98d47cda0744d01c6ff5830e41404b3c170b4faf1bd49e37"} Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.240172 4684 generic.go:334] "Generic (PLEG): container finished" podID="3e252874-6205-4570-a8a8-dada614f685e" containerID="92a47e3603f036dc7013bf3c2d6cda2a28f90e0cb41dd1c34addedd1aafc6d4c" exitCode=0 Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.240260 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" event={"ID":"3e252874-6205-4570-a8a8-dada614f685e","Type":"ContainerDied","Data":"92a47e3603f036dc7013bf3c2d6cda2a28f90e0cb41dd1c34addedd1aafc6d4c"} Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.241285 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" event={"ID":"3e252874-6205-4570-a8a8-dada614f685e","Type":"ContainerStarted","Data":"03b24266e00d7a6426d72a1176625f55c3735a1b8002d3a7034f6303b405d35b"} Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.258181 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zxlq8" podStartSLOduration=29.258155508 podStartE2EDuration="29.258155508s" podCreationTimestamp="2026-01-23 09:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:03.249023754 +0000 UTC m=+1495.872402295" watchObservedRunningTime="2026-01-23 09:32:03.258155508 +0000 UTC m=+1495.881534079" Jan 23 09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.261562 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k24rv" event={"ID":"c51a6dae-114a-4a53-8e31-71f0f0124510","Type":"ContainerStarted","Data":"6f156e921e22b89ceb0741cb89089d81681d6670b52883728b4dd2e13603b52c"} Jan 23 09:32:03 crc kubenswrapper[4684]: E0123 09:32:03.264873 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-9pq2q" podUID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" Jan 23 
09:32:03 crc kubenswrapper[4684]: I0123 09:32:03.333488 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-k24rv" podStartSLOduration=3.302037548 podStartE2EDuration="44.333464776s" podCreationTimestamp="2026-01-23 09:31:19 +0000 UTC" firstStartedPulling="2026-01-23 09:31:21.157418104 +0000 UTC m=+1453.780796645" lastFinishedPulling="2026-01-23 09:32:02.188845332 +0000 UTC m=+1494.812223873" observedRunningTime="2026-01-23 09:32:03.317522875 +0000 UTC m=+1495.940901416" watchObservedRunningTime="2026-01-23 09:32:03.333464776 +0000 UTC m=+1495.956843327" Jan 23 09:32:04 crc kubenswrapper[4684]: I0123 09:32:04.277834 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" event={"ID":"3e252874-6205-4570-a8a8-dada614f685e","Type":"ContainerStarted","Data":"b34cc2bb7b14772f09b40ed69d363f104d610f45095bbc28810f6418559a9a0c"} Jan 23 09:32:04 crc kubenswrapper[4684]: I0123 09:32:04.278749 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:32:04 crc kubenswrapper[4684]: I0123 09:32:04.304806 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" podStartSLOduration=29.304790339 podStartE2EDuration="29.304790339s" podCreationTimestamp="2026-01-23 09:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:04.301146544 +0000 UTC m=+1496.924525095" watchObservedRunningTime="2026-01-23 09:32:04.304790339 +0000 UTC m=+1496.928168880" Jan 23 09:32:06 crc kubenswrapper[4684]: I0123 09:32:06.296355 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerStarted","Data":"caa4120848942469b0152ddf152794d0d4987b8c6e13898b8beb2eea219b6004"} Jan 23 09:32:08 crc kubenswrapper[4684]: I0123 09:32:08.323175 4684 generic.go:334] "Generic (PLEG): container finished" podID="c51a6dae-114a-4a53-8e31-71f0f0124510" containerID="6f156e921e22b89ceb0741cb89089d81681d6670b52883728b4dd2e13603b52c" exitCode=0 Jan 23 09:32:08 crc kubenswrapper[4684]: I0123 09:32:08.323293 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k24rv" event={"ID":"c51a6dae-114a-4a53-8e31-71f0f0124510","Type":"ContainerDied","Data":"6f156e921e22b89ceb0741cb89089d81681d6670b52883728b4dd2e13603b52c"} Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.338263 4684 generic.go:334] "Generic (PLEG): container finished" podID="708d53e6-341e-4e7b-80e8-482b0175948c" containerID="31063009d43a382c032f53f4355e2d098ac4e31c4b5cbaef8ff1fc7f8b44ca70" exitCode=0 Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.338347 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxlq8" event={"ID":"708d53e6-341e-4e7b-80e8-482b0175948c","Type":"ContainerDied","Data":"31063009d43a382c032f53f4355e2d098ac4e31c4b5cbaef8ff1fc7f8b44ca70"} Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.682148 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k24rv" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.876950 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-scripts\") pod \"c51a6dae-114a-4a53-8e31-71f0f0124510\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.878010 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-config-data\") pod \"c51a6dae-114a-4a53-8e31-71f0f0124510\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.878069 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-combined-ca-bundle\") pod \"c51a6dae-114a-4a53-8e31-71f0f0124510\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.878112 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kwtp\" (UniqueName: \"kubernetes.io/projected/c51a6dae-114a-4a53-8e31-71f0f0124510-kube-api-access-9kwtp\") pod \"c51a6dae-114a-4a53-8e31-71f0f0124510\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.878137 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51a6dae-114a-4a53-8e31-71f0f0124510-logs\") pod \"c51a6dae-114a-4a53-8e31-71f0f0124510\" (UID: \"c51a6dae-114a-4a53-8e31-71f0f0124510\") " Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.878822 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c51a6dae-114a-4a53-8e31-71f0f0124510-logs" (OuterVolumeSpecName: "logs") pod "c51a6dae-114a-4a53-8e31-71f0f0124510" (UID: "c51a6dae-114a-4a53-8e31-71f0f0124510"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.886887 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51a6dae-114a-4a53-8e31-71f0f0124510-kube-api-access-9kwtp" (OuterVolumeSpecName: "kube-api-access-9kwtp") pod "c51a6dae-114a-4a53-8e31-71f0f0124510" (UID: "c51a6dae-114a-4a53-8e31-71f0f0124510"). InnerVolumeSpecName "kube-api-access-9kwtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.888486 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-scripts" (OuterVolumeSpecName: "scripts") pod "c51a6dae-114a-4a53-8e31-71f0f0124510" (UID: "c51a6dae-114a-4a53-8e31-71f0f0124510"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.901878 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c51a6dae-114a-4a53-8e31-71f0f0124510" (UID: "c51a6dae-114a-4a53-8e31-71f0f0124510"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.920816 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-config-data" (OuterVolumeSpecName: "config-data") pod "c51a6dae-114a-4a53-8e31-71f0f0124510" (UID: "c51a6dae-114a-4a53-8e31-71f0f0124510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.980379 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.980417 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.980430 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a6dae-114a-4a53-8e31-71f0f0124510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.980440 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kwtp\" (UniqueName: \"kubernetes.io/projected/c51a6dae-114a-4a53-8e31-71f0f0124510-kube-api-access-9kwtp\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:09 crc kubenswrapper[4684]: I0123 09:32:09.980450 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51a6dae-114a-4a53-8e31-71f0f0124510-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.348194 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k24rv" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.352845 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k24rv" event={"ID":"c51a6dae-114a-4a53-8e31-71f0f0124510","Type":"ContainerDied","Data":"0c4d81adc0857d5c6038199b55de2adaeb4fd92e2372deba9ee37b5b4ba35018"} Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.353061 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4d81adc0857d5c6038199b55de2adaeb4fd92e2372deba9ee37b5b4ba35018" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.512175 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f7c769f78-7sfgw"] Jan 23 09:32:10 crc kubenswrapper[4684]: E0123 09:32:10.512604 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51a6dae-114a-4a53-8e31-71f0f0124510" containerName="placement-db-sync" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.512628 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51a6dae-114a-4a53-8e31-71f0f0124510" containerName="placement-db-sync" Jan 23 09:32:10 crc kubenswrapper[4684]: E0123 09:32:10.512655 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.512663 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" Jan 23 09:32:10 crc kubenswrapper[4684]: E0123 09:32:10.512683 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="init" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.512690 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="init" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.512941 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2457a57-4283-4e26-982f-62acaa95c1bf" containerName="dnsmasq-dns" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.512972 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51a6dae-114a-4a53-8e31-71f0f0124510" containerName="placement-db-sync" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.513902 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.524270 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.528024 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.528050 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-79zvt" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.528758 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.528940 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.559324 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f7c769f78-7sfgw"] Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.717976 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-config-data\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.718256 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-combined-ca-bundle\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.718370 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gt2\" (UniqueName: \"kubernetes.io/projected/90ee2ffb-783f-491a-9fa8-e37f267872f6-kube-api-access-75gt2\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.718410 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ee2ffb-783f-491a-9fa8-e37f267872f6-logs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.718453 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-public-tls-certs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.718496 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-internal-tls-certs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 
09:32:10.718593 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-scripts\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.822765 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75gt2\" (UniqueName: \"kubernetes.io/projected/90ee2ffb-783f-491a-9fa8-e37f267872f6-kube-api-access-75gt2\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.823043 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ee2ffb-783f-491a-9fa8-e37f267872f6-logs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.823098 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-public-tls-certs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.823151 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-internal-tls-certs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.823199 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-scripts\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.823228 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-config-data\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.823251 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-combined-ca-bundle\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.834080 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ee2ffb-783f-491a-9fa8-e37f267872f6-logs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.838635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-config-data\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.838076 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-internal-tls-certs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.836722 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-public-tls-certs\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.838896 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-scripts\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.853323 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ee2ffb-783f-491a-9fa8-e37f267872f6-combined-ca-bundle\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:10 crc kubenswrapper[4684]: I0123 09:32:10.867498 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gt2\" (UniqueName: \"kubernetes.io/projected/90ee2ffb-783f-491a-9fa8-e37f267872f6-kube-api-access-75gt2\") pod \"placement-6f7c769f78-7sfgw\" (UID: \"90ee2ffb-783f-491a-9fa8-e37f267872f6\") " pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:11 crc kubenswrapper[4684]: I0123 09:32:11.165062 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:11 crc kubenswrapper[4684]: I0123 09:32:11.268895 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:32:11 crc kubenswrapper[4684]: I0123 09:32:11.324518 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-786f46ff4c-86fsj"] Jan 23 09:32:11 crc kubenswrapper[4684]: I0123 09:32:11.325133 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="dnsmasq-dns" containerID="cri-o://a354ee24f6480d96a49b18023953298ad981479871e4a39cb0a671cead3ab410" gracePeriod=10 Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.376287 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxlq8" event={"ID":"708d53e6-341e-4e7b-80e8-482b0175948c","Type":"ContainerDied","Data":"7b667a8ad74a4b8b98d47cda0744d01c6ff5830e41404b3c170b4faf1bd49e37"} Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.376593 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b667a8ad74a4b8b98d47cda0744d01c6ff5830e41404b3c170b4faf1bd49e37" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.421779 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.459271 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-scripts\") pod \"708d53e6-341e-4e7b-80e8-482b0175948c\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.459333 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-combined-ca-bundle\") pod \"708d53e6-341e-4e7b-80e8-482b0175948c\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.459407 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-credential-keys\") pod \"708d53e6-341e-4e7b-80e8-482b0175948c\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.459453 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-config-data\") pod \"708d53e6-341e-4e7b-80e8-482b0175948c\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.459475 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2rqk\" (UniqueName: \"kubernetes.io/projected/708d53e6-341e-4e7b-80e8-482b0175948c-kube-api-access-b2rqk\") pod \"708d53e6-341e-4e7b-80e8-482b0175948c\" (UID: \"708d53e6-341e-4e7b-80e8-482b0175948c\") " Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.459510 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-fernet-keys\") pod \"708d53e6-341e-4e7b-80e8-482b0175948c\" (UID: 
\"708d53e6-341e-4e7b-80e8-482b0175948c\") " Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.479523 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-scripts" (OuterVolumeSpecName: "scripts") pod "708d53e6-341e-4e7b-80e8-482b0175948c" (UID: "708d53e6-341e-4e7b-80e8-482b0175948c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.480811 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "708d53e6-341e-4e7b-80e8-482b0175948c" (UID: "708d53e6-341e-4e7b-80e8-482b0175948c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.481299 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "708d53e6-341e-4e7b-80e8-482b0175948c" (UID: "708d53e6-341e-4e7b-80e8-482b0175948c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.498027 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708d53e6-341e-4e7b-80e8-482b0175948c-kube-api-access-b2rqk" (OuterVolumeSpecName: "kube-api-access-b2rqk") pod "708d53e6-341e-4e7b-80e8-482b0175948c" (UID: "708d53e6-341e-4e7b-80e8-482b0175948c"). InnerVolumeSpecName "kube-api-access-b2rqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.531126 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "708d53e6-341e-4e7b-80e8-482b0175948c" (UID: "708d53e6-341e-4e7b-80e8-482b0175948c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.539499 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-config-data" (OuterVolumeSpecName: "config-data") pod "708d53e6-341e-4e7b-80e8-482b0175948c" (UID: "708d53e6-341e-4e7b-80e8-482b0175948c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.562727 4684 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.562758 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.562768 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2rqk\" (UniqueName: \"kubernetes.io/projected/708d53e6-341e-4e7b-80e8-482b0175948c-kube-api-access-b2rqk\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.562778 4684 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.562786 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.562794 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708d53e6-341e-4e7b-80e8-482b0175948c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4684]: I0123 09:32:12.888658 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f7c769f78-7sfgw"] Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.384880 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f7c769f78-7sfgw" event={"ID":"90ee2ffb-783f-491a-9fa8-e37f267872f6","Type":"ContainerStarted","Data":"2218944fbb8359c8655b0fee8f095153309f3dcc3867a37751a57d2e2e8b633a"} Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.384911 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxlq8" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.611977 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74b94f7dd5-jfwln"] Jan 23 09:32:13 crc kubenswrapper[4684]: E0123 09:32:13.612395 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708d53e6-341e-4e7b-80e8-482b0175948c" containerName="keystone-bootstrap" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.612416 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="708d53e6-341e-4e7b-80e8-482b0175948c" containerName="keystone-bootstrap" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.612614 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="708d53e6-341e-4e7b-80e8-482b0175948c" containerName="keystone-bootstrap" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.613569 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.629920 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.630024 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.630164 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8c4md" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.630354 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.630393 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.630496 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.643198 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74b94f7dd5-jfwln"] Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.729167 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.729232 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.729283 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.730107 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a400f51794ef4b6fdc66ad213f603d86645f2ebb5c89b0aaf3a7b97ea9ba3a1"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.730176 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://8a400f51794ef4b6fdc66ad213f603d86645f2ebb5c89b0aaf3a7b97ea9ba3a1" gracePeriod=600 Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.781199 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4sb\" (UniqueName: \"kubernetes.io/projected/c7c30d54-36fc-47e2-ad40-c3e530d1b721-kube-api-access-tt4sb\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.781347 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-public-tls-certs\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.781489 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-internal-tls-certs\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.781511 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-scripts\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.781564 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-combined-ca-bundle\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.781585 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-config-data\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.782936 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-fernet-keys\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.783019 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-credential-keys\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884550 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-internal-tls-certs\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884598 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-scripts\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884675 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-combined-ca-bundle\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884692 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-config-data\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884774 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-fernet-keys\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884804 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-credential-keys\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884837 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt4sb\" (UniqueName: \"kubernetes.io/projected/c7c30d54-36fc-47e2-ad40-c3e530d1b721-kube-api-access-tt4sb\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.884902 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-public-tls-certs\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.893077 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-internal-tls-certs\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.895234 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-public-tls-certs\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.899626 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-credential-keys\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.899722 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-config-data\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.900124 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-fernet-keys\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.905266 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-combined-ca-bundle\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.907641 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7c30d54-36fc-47e2-ad40-c3e530d1b721-scripts\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.910375 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt4sb\" (UniqueName: \"kubernetes.io/projected/c7c30d54-36fc-47e2-ad40-c3e530d1b721-kube-api-access-tt4sb\") pod \"keystone-74b94f7dd5-jfwln\" (UID: \"c7c30d54-36fc-47e2-ad40-c3e530d1b721\") " pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:13 crc kubenswrapper[4684]: I0123 09:32:13.990069 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:14 crc kubenswrapper[4684]: I0123 09:32:14.395269 4684 generic.go:334] "Generic (PLEG): container finished" podID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerID="a354ee24f6480d96a49b18023953298ad981479871e4a39cb0a671cead3ab410" exitCode=0 Jan 23 09:32:14 crc kubenswrapper[4684]: I0123 09:32:14.395341 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" event={"ID":"6dcca2c8-e79c-4130-8376-90b178f9d2da","Type":"ContainerDied","Data":"a354ee24f6480d96a49b18023953298ad981479871e4a39cb0a671cead3ab410"} Jan 23 09:32:14 crc kubenswrapper[4684]: I0123 09:32:14.453090 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74b94f7dd5-jfwln"] Jan 23 09:32:15 crc kubenswrapper[4684]: I0123 09:32:15.405107 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74b94f7dd5-jfwln" event={"ID":"c7c30d54-36fc-47e2-ad40-c3e530d1b721","Type":"ContainerStarted","Data":"108f211935f686f45135e0b45af21d997e7f45cc537ed6d2a2b22f35a4bfecf8"} Jan 23 09:32:15 crc kubenswrapper[4684]: I0123 09:32:15.480512 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.401029 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.430208 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="8a400f51794ef4b6fdc66ad213f603d86645f2ebb5c89b0aaf3a7b97ea9ba3a1" exitCode=0 Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.430337 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"8a400f51794ef4b6fdc66ad213f603d86645f2ebb5c89b0aaf3a7b97ea9ba3a1"} Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.430390 4684 scope.go:117] "RemoveContainer" containerID="8ade61f7f4bbb3f3f435e6b903b0fe87d7cf6cd2ec8e018e44229efc22831425" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.442928 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.443924 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786f46ff4c-86fsj" event={"ID":"6dcca2c8-e79c-4130-8376-90b178f9d2da","Type":"ContainerDied","Data":"ef362851d6c862f479fb91ff1545b3281650de59d986a5ab55a968459f4223dc"} Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.493483 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f7c769f78-7sfgw" event={"ID":"90ee2ffb-783f-491a-9fa8-e37f267872f6","Type":"ContainerStarted","Data":"b65e3b572cda645e0bc1c4b77ed0fe5ca65922fea53c288d9b79ea98310b9448"} Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.495393 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74b94f7dd5-jfwln" event={"ID":"c7c30d54-36fc-47e2-ad40-c3e530d1b721","Type":"ContainerStarted","Data":"2877340d08b77958b4b9a51273a7e8be8c23b2e2a3f80fa0ae46b4191d0bea22"} Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.526914 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74b94f7dd5-jfwln" podStartSLOduration=3.526888683 podStartE2EDuration="3.526888683s" podCreationTimestamp="2026-01-23 09:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:16.517294966 +0000 UTC m=+1509.140673507" watchObservedRunningTime="2026-01-23 09:32:16.526888683 +0000 UTC m=+1509.150267234" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.534693 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-config\") pod \"6dcca2c8-e79c-4130-8376-90b178f9d2da\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.535794 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-sb\") pod \"6dcca2c8-e79c-4130-8376-90b178f9d2da\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.535958 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qjh6\" (UniqueName: \"kubernetes.io/projected/6dcca2c8-e79c-4130-8376-90b178f9d2da-kube-api-access-6qjh6\") pod \"6dcca2c8-e79c-4130-8376-90b178f9d2da\" (UID: 
\"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.536100 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-dns-svc\") pod \"6dcca2c8-e79c-4130-8376-90b178f9d2da\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.536244 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-nb\") pod \"6dcca2c8-e79c-4130-8376-90b178f9d2da\" (UID: \"6dcca2c8-e79c-4130-8376-90b178f9d2da\") " Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.543650 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcca2c8-e79c-4130-8376-90b178f9d2da-kube-api-access-6qjh6" (OuterVolumeSpecName: "kube-api-access-6qjh6") pod "6dcca2c8-e79c-4130-8376-90b178f9d2da" (UID: "6dcca2c8-e79c-4130-8376-90b178f9d2da"). InnerVolumeSpecName "kube-api-access-6qjh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.588046 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6dcca2c8-e79c-4130-8376-90b178f9d2da" (UID: "6dcca2c8-e79c-4130-8376-90b178f9d2da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.588054 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dcca2c8-e79c-4130-8376-90b178f9d2da" (UID: "6dcca2c8-e79c-4130-8376-90b178f9d2da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.592572 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-config" (OuterVolumeSpecName: "config") pod "6dcca2c8-e79c-4130-8376-90b178f9d2da" (UID: "6dcca2c8-e79c-4130-8376-90b178f9d2da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.638848 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.638884 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.638920 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.638939 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qjh6\" (UniqueName: \"kubernetes.io/projected/6dcca2c8-e79c-4130-8376-90b178f9d2da-kube-api-access-6qjh6\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.656531 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dcca2c8-e79c-4130-8376-90b178f9d2da" (UID: "6dcca2c8-e79c-4130-8376-90b178f9d2da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.740984 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dcca2c8-e79c-4130-8376-90b178f9d2da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.779475 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-786f46ff4c-86fsj"] Jan 23 09:32:16 crc kubenswrapper[4684]: I0123 09:32:16.787008 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-786f46ff4c-86fsj"] Jan 23 09:32:17 crc kubenswrapper[4684]: I0123 09:32:17.503170 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:17 crc kubenswrapper[4684]: I0123 09:32:17.594980 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" path="/var/lib/kubelet/pods/6dcca2c8-e79c-4130-8376-90b178f9d2da/volumes" Jan 23 09:32:19 crc kubenswrapper[4684]: I0123 09:32:19.687108 4684 scope.go:117] "RemoveContainer" containerID="a354ee24f6480d96a49b18023953298ad981479871e4a39cb0a671cead3ab410" Jan 23 09:32:19 crc kubenswrapper[4684]: I0123 09:32:19.702815 4684 scope.go:117] "RemoveContainer" containerID="a74d741d5c9cb8f3ffa6fd3ebf7b6facec345363f70040cddadcb7ac9467ef0f" Jan 23 09:32:20 crc kubenswrapper[4684]: E0123 09:32:20.243648 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core@sha256:828e2158704d4954145386c2ef8d02a98d34f9e4170fdec3cb0e6de4c955ca92" Jan 23 09:32:20 crc kubenswrapper[4684]: E0123 09:32:20.244351 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core@sha256:828e2158704d4954145386c2ef8d02a98d34f9e4170fdec3cb0e6de4c955ca92,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rprcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(72dbfed3-111a-4a4f-999a-ef7ade8b5116): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 09:32:20 crc kubenswrapper[4684]: I0123 09:32:20.549219 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562"} Jan 23 09:32:20 crc kubenswrapper[4684]: I0123 09:32:20.552329 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f7c769f78-7sfgw" event={"ID":"90ee2ffb-783f-491a-9fa8-e37f267872f6","Type":"ContainerStarted","Data":"9a512b22f4401b908b378c7081e137e313ce05e78b21bb453f5191332406d167"} Jan 23 09:32:20 crc kubenswrapper[4684]: I0123 09:32:20.552864 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:20 crc kubenswrapper[4684]: I0123 09:32:20.635161 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f7c769f78-7sfgw" podStartSLOduration=10.635124992 podStartE2EDuration="10.635124992s" podCreationTimestamp="2026-01-23 09:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:20.61773815 +0000 UTC m=+1513.241116711" watchObservedRunningTime="2026-01-23 09:32:20.635124992 +0000 UTC m=+1513.258503533" Jan 23 09:32:21 crc kubenswrapper[4684]: I0123 09:32:21.577486 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9pq2q" event={"ID":"4ffd82b5-ced8-4cca-89cb-25ad1bba207a","Type":"ContainerStarted","Data":"3e26eee440f0ec913ab7e4b7d3e25f44476d2262a35cfb2268773ccc12052a03"} Jan 23 09:32:21 crc kubenswrapper[4684]: I0123 09:32:21.578840 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:21 crc kubenswrapper[4684]: I0123 09:32:21.620177 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9pq2q" 
podStartSLOduration=3.441539601 podStartE2EDuration="1m2.620152342s" podCreationTimestamp="2026-01-23 09:31:19 +0000 UTC" firstStartedPulling="2026-01-23 09:31:21.175550328 +0000 UTC m=+1453.798928869" lastFinishedPulling="2026-01-23 09:32:20.354163069 +0000 UTC m=+1512.977541610" observedRunningTime="2026-01-23 09:32:21.616822626 +0000 UTC m=+1514.240201187" watchObservedRunningTime="2026-01-23 09:32:21.620152342 +0000 UTC m=+1514.243530883" Jan 23 09:32:22 crc kubenswrapper[4684]: I0123 09:32:22.589382 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpzdh" event={"ID":"82fd9420-b726-4b9d-ad21-b05181fb6e23","Type":"ContainerStarted","Data":"7bf1e0cc8b6d0352dac476223651158bee043c796ac7567db416cf94db715313"} Jan 23 09:32:22 crc kubenswrapper[4684]: I0123 09:32:22.589432 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 09:32:22 crc kubenswrapper[4684]: I0123 09:32:22.626572 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gpzdh" podStartSLOduration=4.175009458 podStartE2EDuration="1m3.62655063s" podCreationTimestamp="2026-01-23 09:31:19 +0000 UTC" firstStartedPulling="2026-01-23 09:31:21.000094955 +0000 UTC m=+1453.623473506" lastFinishedPulling="2026-01-23 09:32:20.451636137 +0000 UTC m=+1513.075014678" observedRunningTime="2026-01-23 09:32:22.615190712 +0000 UTC m=+1515.238569263" watchObservedRunningTime="2026-01-23 09:32:22.62655063 +0000 UTC m=+1515.249929171" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.109217 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.686322 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ktlb4"] Jan 23 09:32:34 crc kubenswrapper[4684]: E0123 09:32:34.687179 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="init" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.687200 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="init" Jan 23 09:32:34 crc kubenswrapper[4684]: E0123 09:32:34.687211 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="dnsmasq-dns" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.687218 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="dnsmasq-dns" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.687427 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcca2c8-e79c-4130-8376-90b178f9d2da" containerName="dnsmasq-dns" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.688969 4684 util.go:30] "No sandbox for pod can be found. 
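The pod_startup_latency_tracker entries above separate pull time from start-up time: barbican-db-sync-9pq2q and cinder-db-sync-gpzdh both show E2E durations over a minute that are almost entirely image pull (09:31:21 to 09:32:20), while placement and keystone carry the "0001-01-01" sentinel meaning nothing was pulled. A parser sketch for those fields, assuming the single-day timestamp format shown above (names illustrative):

```python
import re
from datetime import datetime

TRACKER = re.compile(
    r'"Observed pod startup duration" pod="([^"]+)"'
    r'.*podStartE2EDuration="([^"]+)"'
    r'.*firstStartedPulling="([^"]+)"'
    r'.*lastFinishedPulling="([^"]+)"')

def parse_ts(s):
    # e.g. "2026-01-23 09:31:21.175550328 +0000 UTC": trim nanoseconds to
    # the microseconds strptime accepts, drop the trailing "UTC" token.
    date, time, tz, _ = s.split(" ")
    return datetime.strptime(f"{date} {time[:15]} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")

def pull_windows(lines):
    """Per pod: how much of the start-up E2E duration was spent pulling."""
    for line in lines:
        if (m := TRACKER.search(line)):
            pod, e2e, first, last = m.groups()
            if first.startswith("0001-01-01"):  # sentinel: no pull happened
                yield pod, e2e, None
            else:
                yield pod, e2e, parse_ts(last) - parse_ts(first)
```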
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.720473 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ktlb4"] Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.879646 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-catalog-content\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.879712 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-utilities\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.879767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5l4\" (UniqueName: \"kubernetes.io/projected/6742cb4f-5c93-4e38-8c73-77e75630d3dc-kube-api-access-hf5l4\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.981691 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-catalog-content\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.981775 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-utilities\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:34 crc kubenswrapper[4684]: I0123 09:32:34.981846 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5l4\" (UniqueName: \"kubernetes.io/projected/6742cb4f-5c93-4e38-8c73-77e75630d3dc-kube-api-access-hf5l4\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:35 crc kubenswrapper[4684]: I0123 09:32:35.005819 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-catalog-content\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:35 crc kubenswrapper[4684]: I0123 09:32:35.005860 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-utilities\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:35 crc kubenswrapper[4684]: I0123 09:32:35.019651 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hf5l4\" (UniqueName: \"kubernetes.io/projected/6742cb4f-5c93-4e38-8c73-77e75630d3dc-kube-api-access-hf5l4\") pod \"redhat-operators-ktlb4\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:35 crc kubenswrapper[4684]: I0123 09:32:35.027368 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:35 crc kubenswrapper[4684]: I0123 09:32:35.297729 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f7c769f78-7sfgw" Jan 23 09:32:36 crc kubenswrapper[4684]: E0123 09:32:36.818463 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" Jan 23 09:32:36 crc kubenswrapper[4684]: I0123 09:32:36.844916 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ktlb4"] Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.744960 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-central-agent" containerID="cri-o://5ba451b025535320b1519f5d1b64c8275e5ef33e65b5881a1c0ce7548cf3b62b" gracePeriod=30 Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.745034 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerStarted","Data":"8255de2416173881f91fdc4c3f2c70ac7956c1a20f3c567bb0ce83deb53f0e06"} Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.745925 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.745599 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-notification-agent" containerID="cri-o://caa4120848942469b0152ddf152794d0d4987b8c6e13898b8beb2eea219b6004" gracePeriod=30 Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.745577 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="proxy-httpd" containerID="cri-o://8255de2416173881f91fdc4c3f2c70ac7956c1a20f3c567bb0ce83deb53f0e06" gracePeriod=30 Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.750385 4684 generic.go:334] "Generic (PLEG): container finished" podID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerID="9d0f70c4f90c6be976ded4df9c8f2656ce8bf6b3da10cee7620fcc3f4db0c850" exitCode=0 Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.750543 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerDied","Data":"9d0f70c4f90c6be976ded4df9c8f2656ce8bf6b3da10cee7620fcc3f4db0c850"} Jan 23 09:32:37 crc kubenswrapper[4684]: I0123 09:32:37.750616 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerStarted","Data":"8130639f844b919c48a3cd8b93faeb3c8a9d6860d68b0e806d80a2336fd0dd77"} Jan 
Jan 23 09:32:38 crc kubenswrapper[4684]: I0123 09:32:38.761803 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerStarted","Data":"ebc4269ce2d37ff084e8ba7d830f40a57061294b372b57acef5cf8cd18c65c17"}
Jan 23 09:32:38 crc kubenswrapper[4684]: I0123 09:32:38.766389 4684 generic.go:334] "Generic (PLEG): container finished" podID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerID="8255de2416173881f91fdc4c3f2c70ac7956c1a20f3c567bb0ce83deb53f0e06" exitCode=0
Jan 23 09:32:38 crc kubenswrapper[4684]: I0123 09:32:38.766419 4684 generic.go:334] "Generic (PLEG): container finished" podID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerID="5ba451b025535320b1519f5d1b64c8275e5ef33e65b5881a1c0ce7548cf3b62b" exitCode=0
Jan 23 09:32:38 crc kubenswrapper[4684]: I0123 09:32:38.766438 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerDied","Data":"8255de2416173881f91fdc4c3f2c70ac7956c1a20f3c567bb0ce83deb53f0e06"}
Jan 23 09:32:38 crc kubenswrapper[4684]: I0123 09:32:38.766463 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerDied","Data":"5ba451b025535320b1519f5d1b64c8275e5ef33e65b5881a1c0ce7548cf3b62b"}
Jan 23 09:32:39 crc kubenswrapper[4684]: I0123 09:32:39.779615 4684 generic.go:334] "Generic (PLEG): container finished" podID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerID="ebc4269ce2d37ff084e8ba7d830f40a57061294b372b57acef5cf8cd18c65c17" exitCode=0
Jan 23 09:32:39 crc kubenswrapper[4684]: I0123 09:32:39.780018 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerDied","Data":"ebc4269ce2d37ff084e8ba7d830f40a57061294b372b57acef5cf8cd18c65c17"}
Jan 23 09:32:41 crc kubenswrapper[4684]: E0123 09:32:41.580875 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72dbfed3_111a_4a4f_999a_ef7ade8b5116.slice/crio-caa4120848942469b0152ddf152794d0d4987b8c6e13898b8beb2eea219b6004.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 09:32:42 crc kubenswrapper[4684]: I0123 09:32:42.815211 4684 generic.go:334] "Generic (PLEG): container finished" podID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerID="caa4120848942469b0152ddf152794d0d4987b8c6e13898b8beb2eea219b6004" exitCode=0
Jan 23 09:32:42 crc kubenswrapper[4684]: I0123 09:32:42.816139 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerDied","Data":"caa4120848942469b0152ddf152794d0d4987b8c6e13898b8beb2eea219b6004"}
Jan 23 09:32:42 crc kubenswrapper[4684]: I0123 09:32:42.819646 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerStarted","Data":"8eaef237ee4ad8b68cd853ab04ddfc8ccc00d14fad4b8dd9a7cedae302119a40"}
Jan 23 09:32:42 crc kubenswrapper[4684]: I0123 09:32:42.860253 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ktlb4" podStartSLOduration=4.342243101 podStartE2EDuration="8.860214377s" podCreationTimestamp="2026-01-23 09:32:34 +0000 UTC" firstStartedPulling="2026-01-23 09:32:37.756962629 +0000 UTC m=+1530.380341170" lastFinishedPulling="2026-01-23 09:32:42.274933905 +0000 UTC m=+1534.898312446" observedRunningTime="2026-01-23 09:32:42.848121617 +0000 UTC m=+1535.471500178" watchObservedRunningTime="2026-01-23 09:32:42.860214377 +0000 UTC m=+1535.483592918"
Jan 23 09:32:42 crc kubenswrapper[4684]: I0123 09:32:42.985063 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.132385 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-sg-core-conf-yaml\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.132982 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-scripts\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.133063 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rprcw\" (UniqueName: \"kubernetes.io/projected/72dbfed3-111a-4a4f-999a-ef7ade8b5116-kube-api-access-rprcw\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.133101 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-run-httpd\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.133137 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-log-httpd\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.133188 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-combined-ca-bundle\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.133214 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-config-data\") pod \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\" (UID: \"72dbfed3-111a-4a4f-999a-ef7ade8b5116\") "
Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.133626 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.134473 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.142780 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.142824 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-scripts" (OuterVolumeSpecName: "scripts") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.143081 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dbfed3-111a-4a4f-999a-ef7ade8b5116-kube-api-access-rprcw" (OuterVolumeSpecName: "kube-api-access-rprcw") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "kube-api-access-rprcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.231038 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.235456 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rprcw\" (UniqueName: \"kubernetes.io/projected/72dbfed3-111a-4a4f-999a-ef7ade8b5116-kube-api-access-rprcw\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.235488 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.235500 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72dbfed3-111a-4a4f-999a-ef7ade8b5116-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.235510 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.235521 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.235531 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.248198 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-config-data" (OuterVolumeSpecName: "config-data") pod "72dbfed3-111a-4a4f-999a-ef7ade8b5116" (UID: "72dbfed3-111a-4a4f-999a-ef7ade8b5116"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.338870 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72dbfed3-111a-4a4f-999a-ef7ade8b5116-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.830624 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.831291 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72dbfed3-111a-4a4f-999a-ef7ade8b5116","Type":"ContainerDied","Data":"8fdbb10356dc1a236b23418590cdc764c7c56983b85c70fa5dd55aac9f879759"} Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.831321 4684 scope.go:117] "RemoveContainer" containerID="8255de2416173881f91fdc4c3f2c70ac7956c1a20f3c567bb0ce83deb53f0e06" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.849365 4684 scope.go:117] "RemoveContainer" containerID="caa4120848942469b0152ddf152794d0d4987b8c6e13898b8beb2eea219b6004" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.972005 4684 scope.go:117] "RemoveContainer" containerID="5ba451b025535320b1519f5d1b64c8275e5ef33e65b5881a1c0ce7548cf3b62b" Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.985304 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:32:43 crc kubenswrapper[4684]: I0123 09:32:43.994439 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.006586 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:32:44 crc kubenswrapper[4684]: E0123 09:32:44.007274 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-central-agent" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.007367 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-central-agent" Jan 23 09:32:44 crc kubenswrapper[4684]: E0123 09:32:44.007445 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-notification-agent" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.007529 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-notification-agent" Jan 23 09:32:44 crc kubenswrapper[4684]: E0123 09:32:44.007625 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="proxy-httpd" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.007717 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="proxy-httpd" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.008010 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-central-agent" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.008105 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="ceilometer-notification-agent" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.008179 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" containerName="proxy-httpd" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.010128 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.014858 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.015096 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.029509 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.176730 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-scripts\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.176799 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7cc4\" (UniqueName: \"kubernetes.io/projected/6a531904-7199-45c6-aea1-23fb5a52addf-kube-api-access-b7cc4\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.176828 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-config-data\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.176860 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-log-httpd\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.176938 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-run-httpd\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.176975 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.177022 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.278877 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-run-httpd\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.278932 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.278987 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.279047 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-scripts\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.279085 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7cc4\" (UniqueName: \"kubernetes.io/projected/6a531904-7199-45c6-aea1-23fb5a52addf-kube-api-access-b7cc4\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.279109 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-config-data\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.279147 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-log-httpd\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.279529 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-run-httpd\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.279657 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-log-httpd\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.285927 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-scripts\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.285945 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.287044 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.289907 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-config-data\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.302023 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7cc4\" (UniqueName: \"kubernetes.io/projected/6a531904-7199-45c6-aea1-23fb5a52addf-kube-api-access-b7cc4\") pod \"ceilometer-0\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.338808 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:32:44 crc kubenswrapper[4684]: I0123 09:32:44.849894 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:32:44 crc kubenswrapper[4684]: W0123 09:32:44.859564 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a531904_7199_45c6_aea1_23fb5a52addf.slice/crio-ddfca825dc6d552d5d244813a85979001e3925c944fa658f06e0ae933e021a38 WatchSource:0}: Error finding container ddfca825dc6d552d5d244813a85979001e3925c944fa658f06e0ae933e021a38: Status 404 returned error can't find the container with id ddfca825dc6d552d5d244813a85979001e3925c944fa658f06e0ae933e021a38 Jan 23 09:32:45 crc kubenswrapper[4684]: I0123 09:32:45.027772 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:45 crc kubenswrapper[4684]: I0123 09:32:45.027835 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:45 crc kubenswrapper[4684]: I0123 09:32:45.592691 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72dbfed3-111a-4a4f-999a-ef7ade8b5116" path="/var/lib/kubelet/pods/72dbfed3-111a-4a4f-999a-ef7ade8b5116/volumes" Jan 23 09:32:45 crc kubenswrapper[4684]: I0123 09:32:45.852673 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerStarted","Data":"ddfca825dc6d552d5d244813a85979001e3925c944fa658f06e0ae933e021a38"} Jan 23 09:32:46 crc kubenswrapper[4684]: I0123 09:32:46.085794 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ktlb4" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="registry-server" probeResult="failure" output=< Jan 23 09:32:46 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 09:32:46 crc kubenswrapper[4684]: > Jan 23 09:32:50 crc kubenswrapper[4684]: I0123 09:32:50.897581 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerStarted","Data":"762ea99cb4f27ceb08fbd6bf312d7f3761fcaca60b5f0a11c30bbf20d8ed083e"} Jan 23 09:32:51 crc kubenswrapper[4684]: I0123 09:32:51.988255 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/keystone-74b94f7dd5-jfwln" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.768099 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.769080 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.774328 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.774427 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-k8vq5" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.774764 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.801838 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.946607 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.946745 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config-secret\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.946779 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdr8w\" (UniqueName: \"kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:53 crc kubenswrapper[4684]: I0123 09:32:53.946843 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.048501 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config-secret\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.048925 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdr8w\" (UniqueName: \"kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.049414 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.050319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.050194 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.058542 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:54 crc kubenswrapper[4684]: E0123 09:32:54.059569 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-xdr8w openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="8160e59c-4556-4c83-982f-e88c28c2347a" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.068368 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config-secret\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.068915 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: E0123 09:32:54.070139 4684 projected.go:194] Error preparing data for projected volume kube-api-access-xdr8w for pod openstack/openstackclient: failed to fetch token: pods "openstackclient" not found Jan 23 09:32:54 crc kubenswrapper[4684]: E0123 09:32:54.070199 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w podName:8160e59c-4556-4c83-982f-e88c28c2347a nodeName:}" failed. No retries permitted until 2026-01-23 09:32:54.570180467 +0000 UTC m=+1547.193559008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xdr8w" (UniqueName: "kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w") pod "openstackclient" (UID: "8160e59c-4556-4c83-982f-e88c28c2347a") : failed to fetch token: pods "openstackclient" not found Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.103754 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.142481 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.143565 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.157945 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.253138 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfn2\" (UniqueName: \"kubernetes.io/projected/cfb564ff-94ae-4292-ad6c-41a36677efeb-kube-api-access-tlfn2\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.253236 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cfb564ff-94ae-4292-ad6c-41a36677efeb-openstack-config-secret\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.253467 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cfb564ff-94ae-4292-ad6c-41a36677efeb-openstack-config\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.253645 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb564ff-94ae-4292-ad6c-41a36677efeb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.355081 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cfb564ff-94ae-4292-ad6c-41a36677efeb-openstack-config\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.355153 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb564ff-94ae-4292-ad6c-41a36677efeb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.355214 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlfn2\" (UniqueName: \"kubernetes.io/projected/cfb564ff-94ae-4292-ad6c-41a36677efeb-kube-api-access-tlfn2\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.355281 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cfb564ff-94ae-4292-ad6c-41a36677efeb-openstack-config-secret\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.355966 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cfb564ff-94ae-4292-ad6c-41a36677efeb-openstack-config\") pod \"openstackclient\" (UID: 
\"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.359380 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb564ff-94ae-4292-ad6c-41a36677efeb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.365115 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cfb564ff-94ae-4292-ad6c-41a36677efeb-openstack-config-secret\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.376168 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlfn2\" (UniqueName: \"kubernetes.io/projected/cfb564ff-94ae-4292-ad6c-41a36677efeb-kube-api-access-tlfn2\") pod \"openstackclient\" (UID: \"cfb564ff-94ae-4292-ad6c-41a36677efeb\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.466731 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.660300 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdr8w\" (UniqueName: \"kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w\") pod \"openstackclient\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: E0123 09:32:54.665086 4684 projected.go:194] Error preparing data for projected volume kube-api-access-xdr8w for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8160e59c-4556-4c83-982f-e88c28c2347a) does not match the UID in record. The object might have been deleted and then recreated Jan 23 09:32:54 crc kubenswrapper[4684]: E0123 09:32:54.665177 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w podName:8160e59c-4556-4c83-982f-e88c28c2347a nodeName:}" failed. No retries permitted until 2026-01-23 09:32:55.66515548 +0000 UTC m=+1548.288534021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xdr8w" (UniqueName: "kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w") pod "openstackclient" (UID: "8160e59c-4556-4c83-982f-e88c28c2347a") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8160e59c-4556-4c83-982f-e88c28c2347a) does not match the UID in record. The object might have been deleted and then recreated Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.953782 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.958790 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8160e59c-4556-4c83-982f-e88c28c2347a" podUID="cfb564ff-94ae-4292-ad6c-41a36677efeb" Jan 23 09:32:54 crc kubenswrapper[4684]: I0123 09:32:54.977044 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.067199 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-combined-ca-bundle\") pod \"8160e59c-4556-4c83-982f-e88c28c2347a\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.067738 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config\") pod \"8160e59c-4556-4c83-982f-e88c28c2347a\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.067920 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config-secret\") pod \"8160e59c-4556-4c83-982f-e88c28c2347a\" (UID: \"8160e59c-4556-4c83-982f-e88c28c2347a\") " Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.068970 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdr8w\" (UniqueName: \"kubernetes.io/projected/8160e59c-4556-4c83-982f-e88c28c2347a-kube-api-access-xdr8w\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.069535 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8160e59c-4556-4c83-982f-e88c28c2347a" (UID: "8160e59c-4556-4c83-982f-e88c28c2347a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.087052 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8160e59c-4556-4c83-982f-e88c28c2347a" (UID: "8160e59c-4556-4c83-982f-e88c28c2347a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.089026 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8160e59c-4556-4c83-982f-e88c28c2347a" (UID: "8160e59c-4556-4c83-982f-e88c28c2347a"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.127117 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.170170 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.170205 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.170218 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8160e59c-4556-4c83-982f-e88c28c2347a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.178844 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.234016 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.393980 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ktlb4"] Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.593527 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8160e59c-4556-4c83-982f-e88c28c2347a" path="/var/lib/kubelet/pods/8160e59c-4556-4c83-982f-e88c28c2347a/volumes" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.963224 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerStarted","Data":"894e6602a08cf019d04f87e77782e15bec43f8a532209cc5367cebe50f5ae329"} Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.965632 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cfb564ff-94ae-4292-ad6c-41a36677efeb","Type":"ContainerStarted","Data":"6125af417173a5226bb5925bbb3f12e62cb064a9f3e50b18588fc62ab989a1d0"} Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.965893 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:32:55 crc kubenswrapper[4684]: I0123 09:32:55.971427 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8160e59c-4556-4c83-982f-e88c28c2347a" podUID="cfb564ff-94ae-4292-ad6c-41a36677efeb" Jan 23 09:32:56 crc kubenswrapper[4684]: I0123 09:32:56.990487 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerStarted","Data":"c603a061f6e8d68d2f722940a2bd8f08682950c9eac0d707d8498811e71c3948"} Jan 23 09:32:56 crc kubenswrapper[4684]: I0123 09:32:56.990678 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ktlb4" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="registry-server" containerID="cri-o://8eaef237ee4ad8b68cd853ab04ddfc8ccc00d14fad4b8dd9a7cedae302119a40" gracePeriod=2 Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.007384 4684 generic.go:334] "Generic (PLEG): container finished" podID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerID="8eaef237ee4ad8b68cd853ab04ddfc8ccc00d14fad4b8dd9a7cedae302119a40" exitCode=0 Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.007425 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerDied","Data":"8eaef237ee4ad8b68cd853ab04ddfc8ccc00d14fad4b8dd9a7cedae302119a40"} Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.365143 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.541577 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5l4\" (UniqueName: \"kubernetes.io/projected/6742cb4f-5c93-4e38-8c73-77e75630d3dc-kube-api-access-hf5l4\") pod \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.541636 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-utilities\") pod \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.541821 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-catalog-content\") pod \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\" (UID: \"6742cb4f-5c93-4e38-8c73-77e75630d3dc\") " Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.543969 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-utilities" (OuterVolumeSpecName: "utilities") pod "6742cb4f-5c93-4e38-8c73-77e75630d3dc" (UID: "6742cb4f-5c93-4e38-8c73-77e75630d3dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.570419 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6742cb4f-5c93-4e38-8c73-77e75630d3dc-kube-api-access-hf5l4" (OuterVolumeSpecName: "kube-api-access-hf5l4") pod "6742cb4f-5c93-4e38-8c73-77e75630d3dc" (UID: "6742cb4f-5c93-4e38-8c73-77e75630d3dc"). InnerVolumeSpecName "kube-api-access-hf5l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.643941 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5l4\" (UniqueName: \"kubernetes.io/projected/6742cb4f-5c93-4e38-8c73-77e75630d3dc-kube-api-access-hf5l4\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.643992 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.664227 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6742cb4f-5c93-4e38-8c73-77e75630d3dc" (UID: "6742cb4f-5c93-4e38-8c73-77e75630d3dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:32:58 crc kubenswrapper[4684]: I0123 09:32:58.745794 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6742cb4f-5c93-4e38-8c73-77e75630d3dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.018413 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktlb4" event={"ID":"6742cb4f-5c93-4e38-8c73-77e75630d3dc","Type":"ContainerDied","Data":"8130639f844b919c48a3cd8b93faeb3c8a9d6860d68b0e806d80a2336fd0dd77"} Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.018461 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktlb4" Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.018477 4684 scope.go:117] "RemoveContainer" containerID="8eaef237ee4ad8b68cd853ab04ddfc8ccc00d14fad4b8dd9a7cedae302119a40" Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.057328 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ktlb4"] Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.062051 4684 scope.go:117] "RemoveContainer" containerID="ebc4269ce2d37ff084e8ba7d830f40a57061294b372b57acef5cf8cd18c65c17" Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.066996 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ktlb4"] Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.089466 4684 scope.go:117] "RemoveContainer" containerID="9d0f70c4f90c6be976ded4df9c8f2656ce8bf6b3da10cee7620fcc3f4db0c850" Jan 23 09:32:59 crc kubenswrapper[4684]: I0123 09:32:59.600199 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" path="/var/lib/kubelet/pods/6742cb4f-5c93-4e38-8c73-77e75630d3dc/volumes" Jan 23 09:33:08 crc kubenswrapper[4684]: I0123 09:33:08.118465 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cfb564ff-94ae-4292-ad6c-41a36677efeb","Type":"ContainerStarted","Data":"57aa760fd1c770d2f79a592b04c08408de742e6116d94db87a5dc2728e3bfe1d"} Jan 23 09:33:08 crc kubenswrapper[4684]: I0123 09:33:08.122676 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerStarted","Data":"2b727e49d9cd6451f870dca152eb833c49165c7e74315a83ef2d6eaad85e7873"} Jan 23 09:33:08 crc kubenswrapper[4684]: I0123 09:33:08.123105 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:33:08 crc kubenswrapper[4684]: I0123 09:33:08.138466 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.4119612249999998 podStartE2EDuration="14.138447367s" podCreationTimestamp="2026-01-23 09:32:54 +0000 UTC" firstStartedPulling="2026-01-23 09:32:55.24104428 +0000 UTC m=+1547.864422821" lastFinishedPulling="2026-01-23 09:33:06.967530432 +0000 UTC m=+1559.590908963" observedRunningTime="2026-01-23 09:33:08.134187514 +0000 UTC m=+1560.757566075" watchObservedRunningTime="2026-01-23 09:33:08.138447367 +0000 UTC m=+1560.761825908" Jan 23 09:33:08 crc kubenswrapper[4684]: I0123 09:33:08.163265 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.220586064 podStartE2EDuration="25.163246154s" podCreationTimestamp="2026-01-23 09:32:43 +0000 UTC" firstStartedPulling="2026-01-23 09:32:44.86254388 +0000 UTC m=+1537.485922421" lastFinishedPulling="2026-01-23 09:33:06.80520397 +0000 UTC m=+1559.428582511" observedRunningTime="2026-01-23 09:33:08.162187353 +0000 UTC m=+1560.785565904" watchObservedRunningTime="2026-01-23 09:33:08.163246154 +0000 UTC m=+1560.786624695" Jan 23 09:33:15 crc kubenswrapper[4684]: I0123 09:33:15.181892 4684 generic.go:334] "Generic (PLEG): container finished" podID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" containerID="3e26eee440f0ec913ab7e4b7d3e25f44476d2262a35cfb2268773ccc12052a03" exitCode=0 Jan 23 09:33:15 crc kubenswrapper[4684]: I0123 09:33:15.181953 4684 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-sync-9pq2q" event={"ID":"4ffd82b5-ced8-4cca-89cb-25ad1bba207a","Type":"ContainerDied","Data":"3e26eee440f0ec913ab7e4b7d3e25f44476d2262a35cfb2268773ccc12052a03"} Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.536665 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.577352 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-combined-ca-bundle\") pod \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.577637 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-db-sync-config-data\") pod \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.577742 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsg6d\" (UniqueName: \"kubernetes.io/projected/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-kube-api-access-bsg6d\") pod \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\" (UID: \"4ffd82b5-ced8-4cca-89cb-25ad1bba207a\") " Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.584934 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-kube-api-access-bsg6d" (OuterVolumeSpecName: "kube-api-access-bsg6d") pod "4ffd82b5-ced8-4cca-89cb-25ad1bba207a" (UID: "4ffd82b5-ced8-4cca-89cb-25ad1bba207a"). InnerVolumeSpecName "kube-api-access-bsg6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.589542 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ffd82b5-ced8-4cca-89cb-25ad1bba207a" (UID: "4ffd82b5-ced8-4cca-89cb-25ad1bba207a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.619591 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ffd82b5-ced8-4cca-89cb-25ad1bba207a" (UID: "4ffd82b5-ced8-4cca-89cb-25ad1bba207a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.680076 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.680132 4684 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.680146 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsg6d\" (UniqueName: \"kubernetes.io/projected/4ffd82b5-ced8-4cca-89cb-25ad1bba207a-kube-api-access-bsg6d\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.744105 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.744603 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-central-agent" containerID="cri-o://762ea99cb4f27ceb08fbd6bf312d7f3761fcaca60b5f0a11c30bbf20d8ed083e" gracePeriod=30 Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.745503 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="proxy-httpd" containerID="cri-o://2b727e49d9cd6451f870dca152eb833c49165c7e74315a83ef2d6eaad85e7873" gracePeriod=30 Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.745608 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="sg-core" containerID="cri-o://c603a061f6e8d68d2f722940a2bd8f08682950c9eac0d707d8498811e71c3948" gracePeriod=30 Jan 23 09:33:16 crc kubenswrapper[4684]: I0123 09:33:16.745675 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-notification-agent" containerID="cri-o://894e6602a08cf019d04f87e77782e15bec43f8a532209cc5367cebe50f5ae329" gracePeriod=30 Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.167331 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tjvmj"] Jan 23 09:33:17 crc kubenswrapper[4684]: E0123 09:33:17.168038 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="registry-server" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.168061 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="registry-server" Jan 23 09:33:17 crc kubenswrapper[4684]: E0123 09:33:17.168080 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" containerName="barbican-db-sync" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.168087 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" containerName="barbican-db-sync" Jan 23 09:33:17 crc kubenswrapper[4684]: E0123 09:33:17.168111 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="extract-content" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.168119 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="extract-content" Jan 23 09:33:17 crc kubenswrapper[4684]: E0123 09:33:17.168147 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="extract-utilities" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.168155 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="extract-utilities" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.168344 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6742cb4f-5c93-4e38-8c73-77e75630d3dc" containerName="registry-server" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.168366 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" containerName="barbican-db-sync" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.169061 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.178063 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tjvmj"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.188512 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdj58\" (UniqueName: \"kubernetes.io/projected/3ea9252c-2a2c-4b59-9196-251b12919e70-kube-api-access-tdj58\") pod \"nova-api-db-create-tjvmj\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.188926 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea9252c-2a2c-4b59-9196-251b12919e70-operator-scripts\") pod \"nova-api-db-create-tjvmj\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.204038 4684 generic.go:334] "Generic (PLEG): container finished" podID="6a531904-7199-45c6-aea1-23fb5a52addf" containerID="2b727e49d9cd6451f870dca152eb833c49165c7e74315a83ef2d6eaad85e7873" exitCode=0 Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.204074 4684 generic.go:334] "Generic (PLEG): container finished" podID="6a531904-7199-45c6-aea1-23fb5a52addf" containerID="c603a061f6e8d68d2f722940a2bd8f08682950c9eac0d707d8498811e71c3948" exitCode=2 Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.204121 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerDied","Data":"2b727e49d9cd6451f870dca152eb833c49165c7e74315a83ef2d6eaad85e7873"} Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.204152 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerDied","Data":"c603a061f6e8d68d2f722940a2bd8f08682950c9eac0d707d8498811e71c3948"} Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.205559 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9pq2q" 
event={"ID":"4ffd82b5-ced8-4cca-89cb-25ad1bba207a","Type":"ContainerDied","Data":"ab574c7418715728a94e360e435de81a0713f8695e887305f811460ce99d750b"} Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.205605 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab574c7418715728a94e360e435de81a0713f8695e887305f811460ce99d750b" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.205676 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9pq2q" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.208170 4684 generic.go:334] "Generic (PLEG): container finished" podID="82fd9420-b726-4b9d-ad21-b05181fb6e23" containerID="7bf1e0cc8b6d0352dac476223651158bee043c796ac7567db416cf94db715313" exitCode=0 Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.208205 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpzdh" event={"ID":"82fd9420-b726-4b9d-ad21-b05181fb6e23","Type":"ContainerDied","Data":"7bf1e0cc8b6d0352dac476223651158bee043c796ac7567db416cf94db715313"} Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.283514 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lng65"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.284712 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.291109 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdj58\" (UniqueName: \"kubernetes.io/projected/3ea9252c-2a2c-4b59-9196-251b12919e70-kube-api-access-tdj58\") pod \"nova-api-db-create-tjvmj\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.291270 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea9252c-2a2c-4b59-9196-251b12919e70-operator-scripts\") pod \"nova-api-db-create-tjvmj\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.292037 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea9252c-2a2c-4b59-9196-251b12919e70-operator-scripts\") pod \"nova-api-db-create-tjvmj\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.332798 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lng65"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.366235 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdj58\" (UniqueName: \"kubernetes.io/projected/3ea9252c-2a2c-4b59-9196-251b12919e70-kube-api-access-tdj58\") pod \"nova-api-db-create-tjvmj\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.376542 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4098-account-create-update-mfcrh"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.378102 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.382794 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.392292 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a856b676-2311-4a06-9b0c-4fd64c76e34b-operator-scripts\") pod \"nova-cell0-db-create-lng65\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.392360 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8gs\" (UniqueName: \"kubernetes.io/projected/a856b676-2311-4a06-9b0c-4fd64c76e34b-kube-api-access-mk8gs\") pod \"nova-cell0-db-create-lng65\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.392411 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9db\" (UniqueName: \"kubernetes.io/projected/9314b229-b3d7-40b3-8c79-a327b2f0098d-kube-api-access-qg9db\") pod \"nova-api-4098-account-create-update-mfcrh\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.392480 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9314b229-b3d7-40b3-8c79-a327b2f0098d-operator-scripts\") pod \"nova-api-4098-account-create-update-mfcrh\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.418811 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4098-account-create-update-mfcrh"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.465949 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4qf5d"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.476481 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.486188 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.508043 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22khh\" (UniqueName: \"kubernetes.io/projected/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-kube-api-access-22khh\") pod \"nova-cell1-db-create-4qf5d\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.508115 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a856b676-2311-4a06-9b0c-4fd64c76e34b-operator-scripts\") pod \"nova-cell0-db-create-lng65\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.508157 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8gs\" (UniqueName: \"kubernetes.io/projected/a856b676-2311-4a06-9b0c-4fd64c76e34b-kube-api-access-mk8gs\") pod \"nova-cell0-db-create-lng65\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.508206 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9db\" (UniqueName: \"kubernetes.io/projected/9314b229-b3d7-40b3-8c79-a327b2f0098d-kube-api-access-qg9db\") pod \"nova-api-4098-account-create-update-mfcrh\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.508250 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-operator-scripts\") pod \"nova-cell1-db-create-4qf5d\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.508317 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9314b229-b3d7-40b3-8c79-a327b2f0098d-operator-scripts\") pod \"nova-api-4098-account-create-update-mfcrh\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.509342 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9314b229-b3d7-40b3-8c79-a327b2f0098d-operator-scripts\") pod \"nova-api-4098-account-create-update-mfcrh\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.509993 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a856b676-2311-4a06-9b0c-4fd64c76e34b-operator-scripts\") pod \"nova-cell0-db-create-lng65\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.522975 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4qf5d"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.579434 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9db\" (UniqueName: \"kubernetes.io/projected/9314b229-b3d7-40b3-8c79-a327b2f0098d-kube-api-access-qg9db\") pod \"nova-api-4098-account-create-update-mfcrh\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.603067 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8gs\" (UniqueName: \"kubernetes.io/projected/a856b676-2311-4a06-9b0c-4fd64c76e34b-kube-api-access-mk8gs\") pod \"nova-cell0-db-create-lng65\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.662879 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-operator-scripts\") pod \"nova-cell1-db-create-4qf5d\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.663331 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22khh\" (UniqueName: \"kubernetes.io/projected/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-kube-api-access-22khh\") pod \"nova-cell1-db-create-4qf5d\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.668201 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.671585 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-operator-scripts\") pod \"nova-cell1-db-create-4qf5d\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.791923 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.832804 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e4a7-account-create-update-mx2vl"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.834009 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.844153 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-74bcc55f89-qgvh5"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.852210 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.852725 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22khh\" (UniqueName: \"kubernetes.io/projected/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-kube-api-access-22khh\") pod \"nova-cell1-db-create-4qf5d\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.867878 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.873454 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrw89" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.873721 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.898555 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.916783 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74bcc55f89-qgvh5"] Jan 23 09:33:17 crc kubenswrapper[4684]: I0123 09:33:17.957838 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e4a7-account-create-update-mx2vl"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.003043 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c6d999bfd-wgh9p"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.004826 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.009245 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c6d999bfd-wgh9p"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.011190 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.012281 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rfr\" (UniqueName: \"kubernetes.io/projected/e849936f-39a5-4742-b2d8-d74a04de0ad1-kube-api-access-t6rfr\") pod \"nova-cell0-e4a7-account-create-update-mx2vl\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.012347 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-combined-ca-bundle\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.012379 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-config-data-custom\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.012426 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/996c56f4-2118-4795-91da-d78f1ad2f792-logs\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.012458 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-config-data\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.012497 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m696q\" (UniqueName: \"kubernetes.io/projected/996c56f4-2118-4795-91da-d78f1ad2f792-kube-api-access-m696q\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.016393 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e849936f-39a5-4742-b2d8-d74a04de0ad1-operator-scripts\") pod \"nova-cell0-e4a7-account-create-update-mx2vl\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.030248 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.032281 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b478fbf79-l44nc"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.034833 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.083830 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b478fbf79-l44nc"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.118917 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-combined-ca-bundle\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.118985 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-dns-svc\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119034 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rfr\" (UniqueName: \"kubernetes.io/projected/e849936f-39a5-4742-b2d8-d74a04de0ad1-kube-api-access-t6rfr\") pod \"nova-cell0-e4a7-account-create-update-mx2vl\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119068 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-combined-ca-bundle\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119093 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-config-data-custom\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119145 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/996c56f4-2118-4795-91da-d78f1ad2f792-logs\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119193 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz948\" (UniqueName: \"kubernetes.io/projected/dd332188-f0b4-4a86-a7ec-c722f64e1e41-kube-api-access-bz948\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119232 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-config-data\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119259 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-config-data-custom\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119295 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m696q\" (UniqueName: \"kubernetes.io/projected/996c56f4-2118-4795-91da-d78f1ad2f792-kube-api-access-m696q\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119321 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-nb\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119364 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhl5\" (UniqueName: \"kubernetes.io/projected/3d34dccc-24db-4c90-81fc-d9a898a7a643-kube-api-access-krhl5\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119390 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-sb\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: 
\"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119413 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-config\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119444 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd332188-f0b4-4a86-a7ec-c722f64e1e41-logs\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119465 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-config-data\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.119509 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e849936f-39a5-4742-b2d8-d74a04de0ad1-operator-scripts\") pod \"nova-cell0-e4a7-account-create-update-mx2vl\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.120326 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e849936f-39a5-4742-b2d8-d74a04de0ad1-operator-scripts\") pod \"nova-cell0-e4a7-account-create-update-mx2vl\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.121412 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/996c56f4-2118-4795-91da-d78f1ad2f792-logs\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.128564 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-config-data\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.132125 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-combined-ca-bundle\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.143947 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/996c56f4-2118-4795-91da-d78f1ad2f792-config-data-custom\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.152901 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m696q\" (UniqueName: \"kubernetes.io/projected/996c56f4-2118-4795-91da-d78f1ad2f792-kube-api-access-m696q\") pod \"barbican-worker-74bcc55f89-qgvh5\" (UID: \"996c56f4-2118-4795-91da-d78f1ad2f792\") " pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.155443 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6rfr\" (UniqueName: \"kubernetes.io/projected/e849936f-39a5-4742-b2d8-d74a04de0ad1-kube-api-access-t6rfr\") pod \"nova-cell0-e4a7-account-create-update-mx2vl\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.209436 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a91c-account-create-update-78wch"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.211754 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.218080 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.219174 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220516 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-nb\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220573 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krhl5\" (UniqueName: \"kubernetes.io/projected/3d34dccc-24db-4c90-81fc-d9a898a7a643-kube-api-access-krhl5\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220595 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-sb\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220613 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-config\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220639 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dd332188-f0b4-4a86-a7ec-c722f64e1e41-logs\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220655 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-config-data\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220689 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-combined-ca-bundle\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220733 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-dns-svc\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220788 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz948\" (UniqueName: \"kubernetes.io/projected/dd332188-f0b4-4a86-a7ec-c722f64e1e41-kube-api-access-bz948\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.220825 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-config-data-custom\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.222477 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd332188-f0b4-4a86-a7ec-c722f64e1e41-logs\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.223114 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-nb\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.225884 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-config-data-custom\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.226471 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-dns-svc\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.227328 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-config\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.233657 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-sb\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.236936 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-config-data\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.248186 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd332188-f0b4-4a86-a7ec-c722f64e1e41-combined-ca-bundle\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.253923 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-74bcc55f89-qgvh5" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.263222 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz948\" (UniqueName: \"kubernetes.io/projected/dd332188-f0b4-4a86-a7ec-c722f64e1e41-kube-api-access-bz948\") pod \"barbican-keystone-listener-7c6d999bfd-wgh9p\" (UID: \"dd332188-f0b4-4a86-a7ec-c722f64e1e41\") " pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.274941 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a91c-account-create-update-78wch"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.298772 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhl5\" (UniqueName: \"kubernetes.io/projected/3d34dccc-24db-4c90-81fc-d9a898a7a643-kube-api-access-krhl5\") pod \"dnsmasq-dns-b478fbf79-l44nc\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.313456 4684 generic.go:334] "Generic (PLEG): container finished" podID="6a531904-7199-45c6-aea1-23fb5a52addf" containerID="762ea99cb4f27ceb08fbd6bf312d7f3761fcaca60b5f0a11c30bbf20d8ed083e" exitCode=0 Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.314177 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerDied","Data":"762ea99cb4f27ceb08fbd6bf312d7f3761fcaca60b5f0a11c30bbf20d8ed083e"} Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.326646 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-operator-scripts\") pod \"nova-cell1-a91c-account-create-update-78wch\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.326760 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd7rg\" (UniqueName: \"kubernetes.io/projected/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-kube-api-access-cd7rg\") pod \"nova-cell1-a91c-account-create-update-78wch\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.346059 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.352297 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b96468b6b-tn94s"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.359778 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.366624 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.370255 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b96468b6b-tn94s"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.393329 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.431823 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-operator-scripts\") pod \"nova-cell1-a91c-account-create-update-78wch\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.433021 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tjvmj"] Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.433937 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-operator-scripts\") pod \"nova-cell1-a91c-account-create-update-78wch\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.431895 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd7rg\" (UniqueName: \"kubernetes.io/projected/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-kube-api-access-cd7rg\") pod \"nova-cell1-a91c-account-create-update-78wch\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.524456 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd7rg\" (UniqueName: \"kubernetes.io/projected/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-kube-api-access-cd7rg\") pod \"nova-cell1-a91c-account-create-update-78wch\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.548295 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqvzv\" (UniqueName: \"kubernetes.io/projected/968cfa50-ff5f-4484-8a59-2132539ba65b-kube-api-access-mqvzv\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.548517 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-combined-ca-bundle\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.548647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/968cfa50-ff5f-4484-8a59-2132539ba65b-logs\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.548756 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data-custom\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " 
pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.548828 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.576183 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.663331 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/968cfa50-ff5f-4484-8a59-2132539ba65b-logs\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.663470 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data-custom\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.663496 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.663556 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqvzv\" (UniqueName: \"kubernetes.io/projected/968cfa50-ff5f-4484-8a59-2132539ba65b-kube-api-access-mqvzv\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.663605 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-combined-ca-bundle\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.664526 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/968cfa50-ff5f-4484-8a59-2132539ba65b-logs\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.674287 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-combined-ca-bundle\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.688740 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data-custom\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.716493 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.734322 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqvzv\" (UniqueName: \"kubernetes.io/projected/968cfa50-ff5f-4484-8a59-2132539ba65b-kube-api-access-mqvzv\") pod \"barbican-api-b96468b6b-tn94s\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.735029 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:18 crc kubenswrapper[4684]: I0123 09:33:18.978659 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4098-account-create-update-mfcrh"] Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.031506 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lng65"] Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.346632 4684 generic.go:334] "Generic (PLEG): container finished" podID="6a531904-7199-45c6-aea1-23fb5a52addf" containerID="894e6602a08cf019d04f87e77782e15bec43f8a532209cc5367cebe50f5ae329" exitCode=0 Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.347009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerDied","Data":"894e6602a08cf019d04f87e77782e15bec43f8a532209cc5367cebe50f5ae329"} Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.348547 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lng65" event={"ID":"a856b676-2311-4a06-9b0c-4fd64c76e34b","Type":"ContainerStarted","Data":"9c467b07c4a11c677fc6294629b93443ad75cf9622758c10881555b9fec9cb5e"} Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.349593 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4098-account-create-update-mfcrh" event={"ID":"9314b229-b3d7-40b3-8c79-a327b2f0098d","Type":"ContainerStarted","Data":"fce3b460d774203d223c3f15d3a297af8943c9d1191559d236eb4354702870bd"} Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.360732 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjvmj" event={"ID":"3ea9252c-2a2c-4b59-9196-251b12919e70","Type":"ContainerStarted","Data":"cde92c17ce65f379fd443261643cabe7c35c0c4865c6e4db9c2323a10729d113"} Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.360787 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjvmj" event={"ID":"3ea9252c-2a2c-4b59-9196-251b12919e70","Type":"ContainerStarted","Data":"81cc03231073d52bcdc293da5136704c14c47ede6b7bc463c1d9298dc326769c"} Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.422068 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-tjvmj" podStartSLOduration=2.422038356 
podStartE2EDuration="2.422038356s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:19.376833989 +0000 UTC m=+1572.000212540" watchObservedRunningTime="2026-01-23 09:33:19.422038356 +0000 UTC m=+1572.045416887" Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.559790 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e4a7-account-create-update-mx2vl"] Jan 23 09:33:19 crc kubenswrapper[4684]: W0123 09:33:19.573869 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode849936f_39a5_4742_b2d8_d74a04de0ad1.slice/crio-3242b94f51ca802acd16b7bf5bfe912420dcb44fdc2381f6083941f6f278dfc5 WatchSource:0}: Error finding container 3242b94f51ca802acd16b7bf5bfe912420dcb44fdc2381f6083941f6f278dfc5: Status 404 returned error can't find the container with id 3242b94f51ca802acd16b7bf5bfe912420dcb44fdc2381f6083941f6f278dfc5 Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.685186 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4qf5d"] Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.750836 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.767310 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74bcc55f89-qgvh5"] Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.838265 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c6d999bfd-wgh9p"] Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.946555 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-combined-ca-bundle\") pod \"82fd9420-b726-4b9d-ad21-b05181fb6e23\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.946752 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-scripts\") pod \"82fd9420-b726-4b9d-ad21-b05181fb6e23\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.946844 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82fd9420-b726-4b9d-ad21-b05181fb6e23-etc-machine-id\") pod \"82fd9420-b726-4b9d-ad21-b05181fb6e23\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.946910 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-config-data\") pod \"82fd9420-b726-4b9d-ad21-b05181fb6e23\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.946987 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-db-sync-config-data\") pod \"82fd9420-b726-4b9d-ad21-b05181fb6e23\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " Jan 23 09:33:19 crc 
kubenswrapper[4684]: I0123 09:33:19.947035 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t265x\" (UniqueName: \"kubernetes.io/projected/82fd9420-b726-4b9d-ad21-b05181fb6e23-kube-api-access-t265x\") pod \"82fd9420-b726-4b9d-ad21-b05181fb6e23\" (UID: \"82fd9420-b726-4b9d-ad21-b05181fb6e23\") " Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.948389 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82fd9420-b726-4b9d-ad21-b05181fb6e23-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "82fd9420-b726-4b9d-ad21-b05181fb6e23" (UID: "82fd9420-b726-4b9d-ad21-b05181fb6e23"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.961986 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "82fd9420-b726-4b9d-ad21-b05181fb6e23" (UID: "82fd9420-b726-4b9d-ad21-b05181fb6e23"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.964275 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-scripts" (OuterVolumeSpecName: "scripts") pod "82fd9420-b726-4b9d-ad21-b05181fb6e23" (UID: "82fd9420-b726-4b9d-ad21-b05181fb6e23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:19 crc kubenswrapper[4684]: I0123 09:33:19.975643 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fd9420-b726-4b9d-ad21-b05181fb6e23-kube-api-access-t265x" (OuterVolumeSpecName: "kube-api-access-t265x") pod "82fd9420-b726-4b9d-ad21-b05181fb6e23" (UID: "82fd9420-b726-4b9d-ad21-b05181fb6e23"). InnerVolumeSpecName "kube-api-access-t265x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.032281 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82fd9420-b726-4b9d-ad21-b05181fb6e23" (UID: "82fd9420-b726-4b9d-ad21-b05181fb6e23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.058278 4684 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82fd9420-b726-4b9d-ad21-b05181fb6e23-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.058304 4684 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.058314 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t265x\" (UniqueName: \"kubernetes.io/projected/82fd9420-b726-4b9d-ad21-b05181fb6e23-kube-api-access-t265x\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.058324 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.058332 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.088648 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-config-data" (OuterVolumeSpecName: "config-data") pod "82fd9420-b726-4b9d-ad21-b05181fb6e23" (UID: "82fd9420-b726-4b9d-ad21-b05181fb6e23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.175598 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd9420-b726-4b9d-ad21-b05181fb6e23-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.250751 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a91c-account-create-update-78wch"] Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.275770 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b478fbf79-l44nc"] Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.306106 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.315432 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b96468b6b-tn94s"] Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377528 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7cc4\" (UniqueName: \"kubernetes.io/projected/6a531904-7199-45c6-aea1-23fb5a52addf-kube-api-access-b7cc4\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377597 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-run-httpd\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377628 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-sg-core-conf-yaml\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377714 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-config-data\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377763 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-combined-ca-bundle\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377849 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-log-httpd\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.377875 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-scripts\") pod \"6a531904-7199-45c6-aea1-23fb5a52addf\" (UID: \"6a531904-7199-45c6-aea1-23fb5a52addf\") " Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.380470 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.380691 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.405180 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-scripts" (OuterVolumeSpecName: "scripts") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.415048 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gpzdh" event={"ID":"82fd9420-b726-4b9d-ad21-b05181fb6e23","Type":"ContainerDied","Data":"1d7cca76d17e57a0767b127140623be4bedc0bb7d62c5eb78fad9c048f019e40"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.415098 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7cca76d17e57a0767b127140623be4bedc0bb7d62c5eb78fad9c048f019e40" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.415184 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gpzdh" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.417920 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a531904-7199-45c6-aea1-23fb5a52addf-kube-api-access-b7cc4" (OuterVolumeSpecName: "kube-api-access-b7cc4") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "kube-api-access-b7cc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.437347 4684 generic.go:334] "Generic (PLEG): container finished" podID="a856b676-2311-4a06-9b0c-4fd64c76e34b" containerID="d9093d7423c81301b2be5e47d0675088888410423e30f534ec336ff35fa8df5a" exitCode=0 Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.438058 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lng65" event={"ID":"a856b676-2311-4a06-9b0c-4fd64c76e34b","Type":"ContainerDied","Data":"d9093d7423c81301b2be5e47d0675088888410423e30f534ec336ff35fa8df5a"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.448619 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b96468b6b-tn94s" event={"ID":"968cfa50-ff5f-4484-8a59-2132539ba65b","Type":"ContainerStarted","Data":"aca249ba45447d1283e774c00f4e33e964f68d136e591ee2c3c57ed1aeaaae84"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.453216 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74bcc55f89-qgvh5" event={"ID":"996c56f4-2118-4795-91da-d78f1ad2f792","Type":"ContainerStarted","Data":"f1518d49d1797a7c8f1ff1bb6ca33802eb8ad700776c5822700cecf6db61e78f"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.466894 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4qf5d" event={"ID":"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9","Type":"ContainerStarted","Data":"58b51666892381932027bd23a62c8b283aa5d54836e5e3d07ff3e478db0e8310"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.466946 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4qf5d" event={"ID":"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9","Type":"ContainerStarted","Data":"b5da5dc3d3fbcb57c4195b4ef9967991fd2d3aca73320558795c1f29be70e0c0"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.483479 4684 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.483513 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a531904-7199-45c6-aea1-23fb5a52addf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.483528 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.483540 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7cc4\" (UniqueName: \"kubernetes.io/projected/6a531904-7199-45c6-aea1-23fb5a52addf-kube-api-access-b7cc4\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.500199 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" event={"ID":"dd332188-f0b4-4a86-a7ec-c722f64e1e41","Type":"ContainerStarted","Data":"d9f184d5a70fbad1927df548e87c681167384b7a34144b467a8a7a7702f7ff1d"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.506921 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.539479 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-4qf5d" podStartSLOduration=3.539452904 podStartE2EDuration="3.539452904s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:20.526204021 +0000 UTC m=+1573.149582562" watchObservedRunningTime="2026-01-23 09:33:20.539452904 +0000 UTC m=+1573.162831445" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.573914 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4098-account-create-update-mfcrh" event={"ID":"9314b229-b3d7-40b3-8c79-a327b2f0098d","Type":"ContainerStarted","Data":"128721acce2a0336c13eca4bea3d5af0c23bbfd5b499f7e8f079d8a553cd5bcc"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.584659 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.584871 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.586505 4684 generic.go:334] "Generic (PLEG): container finished" podID="3ea9252c-2a2c-4b59-9196-251b12919e70" containerID="cde92c17ce65f379fd443261643cabe7c35c0c4865c6e4db9c2323a10729d113" exitCode=0 Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.586641 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjvmj" event={"ID":"3ea9252c-2a2c-4b59-9196-251b12919e70","Type":"ContainerDied","Data":"cde92c17ce65f379fd443261643cabe7c35c0c4865c6e4db9c2323a10729d113"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.597846 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-config-data" (OuterVolumeSpecName: "config-data") pod "6a531904-7199-45c6-aea1-23fb5a52addf" (UID: "6a531904-7199-45c6-aea1-23fb5a52addf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.599112 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a531904-7199-45c6-aea1-23fb5a52addf","Type":"ContainerDied","Data":"ddfca825dc6d552d5d244813a85979001e3925c944fa658f06e0ae933e021a38"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.599156 4684 scope.go:117] "RemoveContainer" containerID="2b727e49d9cd6451f870dca152eb833c49165c7e74315a83ef2d6eaad85e7873" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.599277 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.621910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a91c-account-create-update-78wch" event={"ID":"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b","Type":"ContainerStarted","Data":"9e6821d23d1177096438394d565714f18a53a7a4dd733fb7a8e70a77185f0689"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.631923 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" event={"ID":"3d34dccc-24db-4c90-81fc-d9a898a7a643","Type":"ContainerStarted","Data":"04b6f12d54cbf4b83906547897967603aab368d0a688e01e0e560541f1268e26"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.637927 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" event={"ID":"e849936f-39a5-4742-b2d8-d74a04de0ad1","Type":"ContainerStarted","Data":"6943c0e475ecbb2a15f88f73e6f1cb336079f6a44c0e618478a800b0a95d33f3"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.637982 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" event={"ID":"e849936f-39a5-4742-b2d8-d74a04de0ad1","Type":"ContainerStarted","Data":"3242b94f51ca802acd16b7bf5bfe912420dcb44fdc2381f6083941f6f278dfc5"} Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.659499 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" podStartSLOduration=3.659480484 podStartE2EDuration="3.659480484s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:20.657075945 +0000 UTC m=+1573.280454486" watchObservedRunningTime="2026-01-23 09:33:20.659480484 
+0000 UTC m=+1573.282859025" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.681959 4684 scope.go:117] "RemoveContainer" containerID="c603a061f6e8d68d2f722940a2bd8f08682950c9eac0d707d8498811e71c3948" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.688983 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.689019 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a531904-7199-45c6-aea1-23fb5a52addf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.766410 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.798604 4684 scope.go:117] "RemoveContainer" containerID="894e6602a08cf019d04f87e77782e15bec43f8a532209cc5367cebe50f5ae329" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.805291 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.838900 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:20 crc kubenswrapper[4684]: E0123 09:33:20.839418 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-notification-agent" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839437 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-notification-agent" Jan 23 09:33:20 crc kubenswrapper[4684]: E0123 09:33:20.839465 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="sg-core" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839471 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="sg-core" Jan 23 09:33:20 crc kubenswrapper[4684]: E0123 09:33:20.839478 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-central-agent" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839484 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-central-agent" Jan 23 09:33:20 crc kubenswrapper[4684]: E0123 09:33:20.839497 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fd9420-b726-4b9d-ad21-b05181fb6e23" containerName="cinder-db-sync" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839502 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fd9420-b726-4b9d-ad21-b05181fb6e23" containerName="cinder-db-sync" Jan 23 09:33:20 crc kubenswrapper[4684]: E0123 09:33:20.839510 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="proxy-httpd" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839517 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="proxy-httpd" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839738 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" 
containerName="sg-core" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839761 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="proxy-httpd" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839772 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-notification-agent" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839787 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" containerName="ceilometer-central-agent" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.839805 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fd9420-b726-4b9d-ad21-b05181fb6e23" containerName="cinder-db-sync" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.841336 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.846682 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.847037 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.889475 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.894450 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-config-data\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.894495 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-run-httpd\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.894518 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwtc7\" (UniqueName: \"kubernetes.io/projected/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-kube-api-access-pwtc7\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.894649 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.894825 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-log-httpd\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.895036 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.895350 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-scripts\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:20 crc kubenswrapper[4684]: I0123 09:33:20.904311 4684 scope.go:117] "RemoveContainer" containerID="762ea99cb4f27ceb08fbd6bf312d7f3761fcaca60b5f0a11c30bbf20d8ed083e" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000723 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000782 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-scripts\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000859 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-config-data\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000879 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-run-httpd\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000900 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwtc7\" (UniqueName: \"kubernetes.io/projected/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-kube-api-access-pwtc7\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000921 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.000968 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-log-httpd\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.001416 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-log-httpd\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 
09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.002217 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-run-httpd\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.007793 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-scripts\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.007889 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.009998 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-config-data\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.029006 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.029913 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwtc7\" (UniqueName: \"kubernetes.io/projected/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-kube-api-access-pwtc7\") pod \"ceilometer-0\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.212348 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.228219 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.268247 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.272431 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.282402 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.282680 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.282914 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-82m59" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.283055 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.294054 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b478fbf79-l44nc"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.328387 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-scripts\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.328471 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ccn\" (UniqueName: \"kubernetes.io/projected/59aa4e93-3c29-45d1-95d0-7cc3f595765a-kube-api-access-x8ccn\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.328504 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.328578 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.328671 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.328710 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59aa4e93-3c29-45d1-95d0-7cc3f595765a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.335174 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b9fcb755f-mbwwx"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.336723 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.385758 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b9fcb755f-mbwwx"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.438847 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.438901 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59aa4e93-3c29-45d1-95d0-7cc3f595765a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.438985 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-scripts\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.439037 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ccn\" (UniqueName: \"kubernetes.io/projected/59aa4e93-3c29-45d1-95d0-7cc3f595765a-kube-api-access-x8ccn\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.439066 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.439133 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.440006 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59aa4e93-3c29-45d1-95d0-7cc3f595765a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.466656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.467058 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-scripts\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc 
kubenswrapper[4684]: I0123 09:33:21.467581 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.467593 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.499433 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ccn\" (UniqueName: \"kubernetes.io/projected/59aa4e93-3c29-45d1-95d0-7cc3f595765a-kube-api-access-x8ccn\") pod \"cinder-scheduler-0\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.547581 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-sb\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.548030 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-dns-svc\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.548252 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2n4\" (UniqueName: \"kubernetes.io/projected/42713741-8e02-44d5-b649-adf7d0f80837-kube-api-access-zj2n4\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.548377 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-nb\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.548479 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-config\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.604584 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a531904-7199-45c6-aea1-23fb5a52addf" path="/var/lib/kubelet/pods/6a531904-7199-45c6-aea1-23fb5a52addf/volumes" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.630233 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.632595 4684 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.645547 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.651145 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2n4\" (UniqueName: \"kubernetes.io/projected/42713741-8e02-44d5-b649-adf7d0f80837-kube-api-access-zj2n4\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.651204 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-nb\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.651224 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-config\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.651292 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-sb\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.651323 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-dns-svc\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.652251 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-dns-svc\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.652508 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.653567 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-config\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.654285 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-sb\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.662094 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-nb\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.706019 4684 generic.go:334] "Generic (PLEG): container finished" podID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerID="683ab71d136a2e6ae2fcb3d81791aefcc10b3efc99221d65c4e8e1c8f9fa2a5e" exitCode=0 Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.706376 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" event={"ID":"3d34dccc-24db-4c90-81fc-d9a898a7a643","Type":"ContainerDied","Data":"683ab71d136a2e6ae2fcb3d81791aefcc10b3efc99221d65c4e8e1c8f9fa2a5e"} Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.718171 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a91c-account-create-update-78wch" event={"ID":"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b","Type":"ContainerStarted","Data":"f8fde4f2b4c0065028fa2e7df507122426c9c3503bc90505443a6cffe2b7b394"} Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.754262 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.756716 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8693d7b-d2eb-4be6-95f7-299baceab47f-logs\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.756914 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.756942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8693d7b-d2eb-4be6-95f7-299baceab47f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.756967 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-scripts\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.756987 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.757008 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " 
pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.757046 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbng\" (UniqueName: \"kubernetes.io/projected/b8693d7b-d2eb-4be6-95f7-299baceab47f-kube-api-access-tvbng\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.784669 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b96468b6b-tn94s" event={"ID":"968cfa50-ff5f-4484-8a59-2132539ba65b","Type":"ContainerStarted","Data":"1263c51796e9cd2b6f83c2218e9dba9668852d7e31ce74acbbcda5bfc627c52e"} Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.794592 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2n4\" (UniqueName: \"kubernetes.io/projected/42713741-8e02-44d5-b649-adf7d0f80837-kube-api-access-zj2n4\") pod \"dnsmasq-dns-b9fcb755f-mbwwx\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.864912 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.865096 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8693d7b-d2eb-4be6-95f7-299baceab47f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.865173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-scripts\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.865255 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.865385 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.865521 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbng\" (UniqueName: \"kubernetes.io/projected/b8693d7b-d2eb-4be6-95f7-299baceab47f-kube-api-access-tvbng\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.865658 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8693d7b-d2eb-4be6-95f7-299baceab47f-logs\") pod \"cinder-api-0\" (UID: 
\"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.866203 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8693d7b-d2eb-4be6-95f7-299baceab47f-logs\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.868072 4684 generic.go:334] "Generic (PLEG): container finished" podID="51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" containerID="58b51666892381932027bd23a62c8b283aa5d54836e5e3d07ff3e478db0e8310" exitCode=0 Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.868162 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4qf5d" event={"ID":"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9","Type":"ContainerDied","Data":"58b51666892381932027bd23a62c8b283aa5d54836e5e3d07ff3e478db0e8310"} Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.868952 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8693d7b-d2eb-4be6-95f7-299baceab47f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.894750 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-scripts\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.894946 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.899157 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.922886 4684 generic.go:334] "Generic (PLEG): container finished" podID="9314b229-b3d7-40b3-8c79-a327b2f0098d" containerID="128721acce2a0336c13eca4bea3d5af0c23bbfd5b499f7e8f079d8a553cd5bcc" exitCode=0 Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.924885 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4098-account-create-update-mfcrh" event={"ID":"9314b229-b3d7-40b3-8c79-a327b2f0098d","Type":"ContainerDied","Data":"128721acce2a0336c13eca4bea3d5af0c23bbfd5b499f7e8f079d8a553cd5bcc"} Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.927713 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.942449 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbng\" (UniqueName: \"kubernetes.io/projected/b8693d7b-d2eb-4be6-95f7-299baceab47f-kube-api-access-tvbng\") pod 
\"cinder-api-0\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") " pod="openstack/cinder-api-0" Jan 23 09:33:21 crc kubenswrapper[4684]: I0123 09:33:21.946995 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-a91c-account-create-update-78wch" podStartSLOduration=4.946968108 podStartE2EDuration="4.946968108s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:21.922435979 +0000 UTC m=+1574.545814530" watchObservedRunningTime="2026-01-23 09:33:21.946968108 +0000 UTC m=+1574.570346649" Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.027589 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.094149 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.588500 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.956361 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" event={"ID":"3d34dccc-24db-4c90-81fc-d9a898a7a643","Type":"ContainerStarted","Data":"49fd1c1e6498827bc914fcc2c0a38944252a1fe19582ccda62f7fe76e7196a81"} Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.956835 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerName="dnsmasq-dns" containerID="cri-o://49fd1c1e6498827bc914fcc2c0a38944252a1fe19582ccda62f7fe76e7196a81" gracePeriod=10 Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.957939 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.970066 4684 generic.go:334] "Generic (PLEG): container finished" podID="e849936f-39a5-4742-b2d8-d74a04de0ad1" containerID="6943c0e475ecbb2a15f88f73e6f1cb336079f6a44c0e618478a800b0a95d33f3" exitCode=0 Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.970150 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" event={"ID":"e849936f-39a5-4742-b2d8-d74a04de0ad1","Type":"ContainerDied","Data":"6943c0e475ecbb2a15f88f73e6f1cb336079f6a44c0e618478a800b0a95d33f3"} Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.975897 4684 generic.go:334] "Generic (PLEG): container finished" podID="14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" containerID="f8fde4f2b4c0065028fa2e7df507122426c9c3503bc90505443a6cffe2b7b394" exitCode=0 Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.975963 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a91c-account-create-update-78wch" event={"ID":"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b","Type":"ContainerDied","Data":"f8fde4f2b4c0065028fa2e7df507122426c9c3503bc90505443a6cffe2b7b394"} Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.993594 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b96468b6b-tn94s" event={"ID":"968cfa50-ff5f-4484-8a59-2132539ba65b","Type":"ContainerStarted","Data":"b9e405e70e2266d7fc6dbefa73f7a4ed51df68a9d096ed3eba47fe22d807ecb3"} Jan 23 09:33:22 crc kubenswrapper[4684]: 
I0123 09:33:22.993640 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:22 crc kubenswrapper[4684]: I0123 09:33:22.993663 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:23 crc kubenswrapper[4684]: I0123 09:33:23.010978 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" podStartSLOduration=6.010954652 podStartE2EDuration="6.010954652s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:22.984774125 +0000 UTC m=+1575.608152686" watchObservedRunningTime="2026-01-23 09:33:23.010954652 +0000 UTC m=+1575.634333203" Jan 23 09:33:23 crc kubenswrapper[4684]: I0123 09:33:23.076786 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:23 crc kubenswrapper[4684]: I0123 09:33:23.212650 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b96468b6b-tn94s" podStartSLOduration=5.212625092 podStartE2EDuration="5.212625092s" podCreationTimestamp="2026-01-23 09:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:23.067548708 +0000 UTC m=+1575.690927239" watchObservedRunningTime="2026-01-23 09:33:23.212625092 +0000 UTC m=+1575.836003623" Jan 23 09:33:23 crc kubenswrapper[4684]: I0123 09:33:23.511264 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:33:23 crc kubenswrapper[4684]: I0123 09:33:23.549055 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b9fcb755f-mbwwx"] Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.004233 4684 generic.go:334] "Generic (PLEG): container finished" podID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerID="49fd1c1e6498827bc914fcc2c0a38944252a1fe19582ccda62f7fe76e7196a81" exitCode=0 Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.004283 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" event={"ID":"3d34dccc-24db-4c90-81fc-d9a898a7a643","Type":"ContainerDied","Data":"49fd1c1e6498827bc914fcc2c0a38944252a1fe19582ccda62f7fe76e7196a81"} Jan 23 09:33:24 crc kubenswrapper[4684]: W0123 09:33:24.240044 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c701147_6de2_4cd9_8d2e_05831ceb7ed5.slice/crio-a1f743c3c2b51a03fd01bcb730f1e823e862489a0ba227bead8c54609e516db6 WatchSource:0}: Error finding container a1f743c3c2b51a03fd01bcb730f1e823e862489a0ba227bead8c54609e516db6: Status 404 returned error can't find the container with id a1f743c3c2b51a03fd01bcb730f1e823e862489a0ba227bead8c54609e516db6 Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.477630 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.478508 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.488473 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.489342 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.585268 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22khh\" (UniqueName: \"kubernetes.io/projected/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-kube-api-access-22khh\") pod \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.585570 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a856b676-2311-4a06-9b0c-4fd64c76e34b-operator-scripts\") pod \"a856b676-2311-4a06-9b0c-4fd64c76e34b\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.585595 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-operator-scripts\") pod \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\" (UID: \"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.585623 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk8gs\" (UniqueName: \"kubernetes.io/projected/a856b676-2311-4a06-9b0c-4fd64c76e34b-kube-api-access-mk8gs\") pod \"a856b676-2311-4a06-9b0c-4fd64c76e34b\" (UID: \"a856b676-2311-4a06-9b0c-4fd64c76e34b\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.585721 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9314b229-b3d7-40b3-8c79-a327b2f0098d-operator-scripts\") pod \"9314b229-b3d7-40b3-8c79-a327b2f0098d\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.587745 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" (UID: "51bdf1ce-d5b3-4862-aa1c-4648c84f87a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.588133 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9314b229-b3d7-40b3-8c79-a327b2f0098d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9314b229-b3d7-40b3-8c79-a327b2f0098d" (UID: "9314b229-b3d7-40b3-8c79-a327b2f0098d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.585831 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea9252c-2a2c-4b59-9196-251b12919e70-operator-scripts\") pod \"3ea9252c-2a2c-4b59-9196-251b12919e70\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.588614 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a856b676-2311-4a06-9b0c-4fd64c76e34b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a856b676-2311-4a06-9b0c-4fd64c76e34b" (UID: "a856b676-2311-4a06-9b0c-4fd64c76e34b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.588670 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdj58\" (UniqueName: \"kubernetes.io/projected/3ea9252c-2a2c-4b59-9196-251b12919e70-kube-api-access-tdj58\") pod \"3ea9252c-2a2c-4b59-9196-251b12919e70\" (UID: \"3ea9252c-2a2c-4b59-9196-251b12919e70\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.589225 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg9db\" (UniqueName: \"kubernetes.io/projected/9314b229-b3d7-40b3-8c79-a327b2f0098d-kube-api-access-qg9db\") pod \"9314b229-b3d7-40b3-8c79-a327b2f0098d\" (UID: \"9314b229-b3d7-40b3-8c79-a327b2f0098d\") " Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.589300 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea9252c-2a2c-4b59-9196-251b12919e70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ea9252c-2a2c-4b59-9196-251b12919e70" (UID: "3ea9252c-2a2c-4b59-9196-251b12919e70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.592097 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a856b676-2311-4a06-9b0c-4fd64c76e34b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.592124 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.592134 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9314b229-b3d7-40b3-8c79-a327b2f0098d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.592143 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea9252c-2a2c-4b59-9196-251b12919e70-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.607063 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea9252c-2a2c-4b59-9196-251b12919e70-kube-api-access-tdj58" (OuterVolumeSpecName: "kube-api-access-tdj58") pod "3ea9252c-2a2c-4b59-9196-251b12919e70" (UID: "3ea9252c-2a2c-4b59-9196-251b12919e70"). InnerVolumeSpecName "kube-api-access-tdj58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.609033 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9314b229-b3d7-40b3-8c79-a327b2f0098d-kube-api-access-qg9db" (OuterVolumeSpecName: "kube-api-access-qg9db") pod "9314b229-b3d7-40b3-8c79-a327b2f0098d" (UID: "9314b229-b3d7-40b3-8c79-a327b2f0098d"). InnerVolumeSpecName "kube-api-access-qg9db". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.611828 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a856b676-2311-4a06-9b0c-4fd64c76e34b-kube-api-access-mk8gs" (OuterVolumeSpecName: "kube-api-access-mk8gs") pod "a856b676-2311-4a06-9b0c-4fd64c76e34b" (UID: "a856b676-2311-4a06-9b0c-4fd64c76e34b"). InnerVolumeSpecName "kube-api-access-mk8gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.643346 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-kube-api-access-22khh" (OuterVolumeSpecName: "kube-api-access-22khh") pod "51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" (UID: "51bdf1ce-d5b3-4862-aa1c-4648c84f87a9"). InnerVolumeSpecName "kube-api-access-22khh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.693923 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdj58\" (UniqueName: \"kubernetes.io/projected/3ea9252c-2a2c-4b59-9196-251b12919e70-kube-api-access-tdj58\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.693991 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg9db\" (UniqueName: \"kubernetes.io/projected/9314b229-b3d7-40b3-8c79-a327b2f0098d-kube-api-access-qg9db\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.694005 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22khh\" (UniqueName: \"kubernetes.io/projected/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9-kube-api-access-22khh\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:24 crc kubenswrapper[4684]: I0123 09:33:24.694020 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk8gs\" (UniqueName: \"kubernetes.io/projected/a856b676-2311-4a06-9b0c-4fd64c76e34b-kube-api-access-mk8gs\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.023595 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lng65" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.025572 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lng65" event={"ID":"a856b676-2311-4a06-9b0c-4fd64c76e34b","Type":"ContainerDied","Data":"9c467b07c4a11c677fc6294629b93443ad75cf9622758c10881555b9fec9cb5e"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.025614 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c467b07c4a11c677fc6294629b93443ad75cf9622758c10881555b9fec9cb5e" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.027406 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4098-account-create-update-mfcrh" event={"ID":"9314b229-b3d7-40b3-8c79-a327b2f0098d","Type":"ContainerDied","Data":"fce3b460d774203d223c3f15d3a297af8943c9d1191559d236eb4354702870bd"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.027434 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce3b460d774203d223c3f15d3a297af8943c9d1191559d236eb4354702870bd" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.027495 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4098-account-create-update-mfcrh" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.035540 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerStarted","Data":"a1f743c3c2b51a03fd01bcb730f1e823e862489a0ba227bead8c54609e516db6"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.037090 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjvmj" event={"ID":"3ea9252c-2a2c-4b59-9196-251b12919e70","Type":"ContainerDied","Data":"81cc03231073d52bcdc293da5136704c14c47ede6b7bc463c1d9298dc326769c"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.037117 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81cc03231073d52bcdc293da5136704c14c47ede6b7bc463c1d9298dc326769c" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.037171 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tjvmj" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.046910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" event={"ID":"42713741-8e02-44d5-b649-adf7d0f80837","Type":"ContainerStarted","Data":"2e7ccb4080e22c294b9be34b08cb815f8d549638167648b57131279b78c94662"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.054641 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59aa4e93-3c29-45d1-95d0-7cc3f595765a","Type":"ContainerStarted","Data":"27030967a5e8b1deb8d56112475a6bda5e7c24048a27e08eaf33ffd6004dc48c"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.061304 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4qf5d" event={"ID":"51bdf1ce-d5b3-4862-aa1c-4648c84f87a9","Type":"ContainerDied","Data":"b5da5dc3d3fbcb57c4195b4ef9967991fd2d3aca73320558795c1f29be70e0c0"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.062625 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5da5dc3d3fbcb57c4195b4ef9967991fd2d3aca73320558795c1f29be70e0c0" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.061361 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4qf5d" Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.067504 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8693d7b-d2eb-4be6-95f7-299baceab47f","Type":"ContainerStarted","Data":"be92492981c11afa4f1989c797889385cfae71d02081edea17a3971f71e46e93"} Jan 23 09:33:25 crc kubenswrapper[4684]: I0123 09:33:25.394866 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.079458 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fb45b76fb-6d9bh"] Jan 23 09:33:26 crc kubenswrapper[4684]: E0123 09:33:26.079905 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.079922 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: E0123 09:33:26.079938 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a856b676-2311-4a06-9b0c-4fd64c76e34b" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.079946 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a856b676-2311-4a06-9b0c-4fd64c76e34b" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: E0123 09:33:26.079967 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9314b229-b3d7-40b3-8c79-a327b2f0098d" containerName="mariadb-account-create-update" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.079974 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9314b229-b3d7-40b3-8c79-a327b2f0098d" containerName="mariadb-account-create-update" Jan 23 09:33:26 crc kubenswrapper[4684]: E0123 09:33:26.079999 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea9252c-2a2c-4b59-9196-251b12919e70" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.080006 4684 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea9252c-2a2c-4b59-9196-251b12919e70" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.080185 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.080211 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="9314b229-b3d7-40b3-8c79-a327b2f0098d" containerName="mariadb-account-create-update" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.080224 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea9252c-2a2c-4b59-9196-251b12919e70" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.080238 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a856b676-2311-4a06-9b0c-4fd64c76e34b" containerName="mariadb-database-create" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.081483 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.085997 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.086262 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130193 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-config-data-custom\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130261 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwrt\" (UniqueName: \"kubernetes.io/projected/d239343a-876f-4e5e-abf8-2bd91fee9812-kube-api-access-bnwrt\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130348 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-internal-tls-certs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130400 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-config-data\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130440 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d239343a-876f-4e5e-abf8-2bd91fee9812-logs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" 
Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130462 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-combined-ca-bundle\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.130480 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-public-tls-certs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.152327 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb45b76fb-6d9bh"] Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.231888 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-internal-tls-certs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.231994 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-config-data\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.232051 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d239343a-876f-4e5e-abf8-2bd91fee9812-logs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.232073 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-combined-ca-bundle\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.232091 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-public-tls-certs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.232133 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-config-data-custom\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.232158 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwrt\" (UniqueName: 
\"kubernetes.io/projected/d239343a-876f-4e5e-abf8-2bd91fee9812-kube-api-access-bnwrt\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.238830 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d239343a-876f-4e5e-abf8-2bd91fee9812-logs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.240104 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-internal-tls-certs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.240759 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-config-data-custom\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.241327 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-combined-ca-bundle\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.243260 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-public-tls-certs\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.244058 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239343a-876f-4e5e-abf8-2bd91fee9812-config-data\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.250073 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwrt\" (UniqueName: \"kubernetes.io/projected/d239343a-876f-4e5e-abf8-2bd91fee9812-kube-api-access-bnwrt\") pod \"barbican-api-6fb45b76fb-6d9bh\" (UID: \"d239343a-876f-4e5e-abf8-2bd91fee9812\") " pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.423140 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.558989 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.563185 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.572069 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638086 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-sb\") pod \"3d34dccc-24db-4c90-81fc-d9a898a7a643\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638268 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-config\") pod \"3d34dccc-24db-4c90-81fc-d9a898a7a643\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638303 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd7rg\" (UniqueName: \"kubernetes.io/projected/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-kube-api-access-cd7rg\") pod \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638341 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhl5\" (UniqueName: \"kubernetes.io/projected/3d34dccc-24db-4c90-81fc-d9a898a7a643-kube-api-access-krhl5\") pod \"3d34dccc-24db-4c90-81fc-d9a898a7a643\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638364 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-operator-scripts\") pod \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\" (UID: \"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638385 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6rfr\" (UniqueName: \"kubernetes.io/projected/e849936f-39a5-4742-b2d8-d74a04de0ad1-kube-api-access-t6rfr\") pod \"e849936f-39a5-4742-b2d8-d74a04de0ad1\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638417 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-nb\") pod \"3d34dccc-24db-4c90-81fc-d9a898a7a643\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638457 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e849936f-39a5-4742-b2d8-d74a04de0ad1-operator-scripts\") pod \"e849936f-39a5-4742-b2d8-d74a04de0ad1\" (UID: \"e849936f-39a5-4742-b2d8-d74a04de0ad1\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.638564 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-dns-svc\") pod \"3d34dccc-24db-4c90-81fc-d9a898a7a643\" (UID: \"3d34dccc-24db-4c90-81fc-d9a898a7a643\") " Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 
09:33:26.640050 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" (UID: "14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.640563 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e849936f-39a5-4742-b2d8-d74a04de0ad1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e849936f-39a5-4742-b2d8-d74a04de0ad1" (UID: "e849936f-39a5-4742-b2d8-d74a04de0ad1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.663335 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e849936f-39a5-4742-b2d8-d74a04de0ad1-kube-api-access-t6rfr" (OuterVolumeSpecName: "kube-api-access-t6rfr") pod "e849936f-39a5-4742-b2d8-d74a04de0ad1" (UID: "e849936f-39a5-4742-b2d8-d74a04de0ad1"). InnerVolumeSpecName "kube-api-access-t6rfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.667713 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-kube-api-access-cd7rg" (OuterVolumeSpecName: "kube-api-access-cd7rg") pod "14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" (UID: "14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b"). InnerVolumeSpecName "kube-api-access-cd7rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.675787 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d34dccc-24db-4c90-81fc-d9a898a7a643-kube-api-access-krhl5" (OuterVolumeSpecName: "kube-api-access-krhl5") pod "3d34dccc-24db-4c90-81fc-d9a898a7a643" (UID: "3d34dccc-24db-4c90-81fc-d9a898a7a643"). InnerVolumeSpecName "kube-api-access-krhl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.711092 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d34dccc-24db-4c90-81fc-d9a898a7a643" (UID: "3d34dccc-24db-4c90-81fc-d9a898a7a643"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.741525 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.741566 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd7rg\" (UniqueName: \"kubernetes.io/projected/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-kube-api-access-cd7rg\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.741584 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhl5\" (UniqueName: \"kubernetes.io/projected/3d34dccc-24db-4c90-81fc-d9a898a7a643-kube-api-access-krhl5\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.741597 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.741606 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6rfr\" (UniqueName: \"kubernetes.io/projected/e849936f-39a5-4742-b2d8-d74a04de0ad1-kube-api-access-t6rfr\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.741614 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e849936f-39a5-4742-b2d8-d74a04de0ad1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.762383 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-config" (OuterVolumeSpecName: "config") pod "3d34dccc-24db-4c90-81fc-d9a898a7a643" (UID: "3d34dccc-24db-4c90-81fc-d9a898a7a643"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.773313 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d34dccc-24db-4c90-81fc-d9a898a7a643" (UID: "3d34dccc-24db-4c90-81fc-d9a898a7a643"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.779718 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d34dccc-24db-4c90-81fc-d9a898a7a643" (UID: "3d34dccc-24db-4c90-81fc-d9a898a7a643"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.843362 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.843406 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:26 crc kubenswrapper[4684]: I0123 09:33:26.843418 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d34dccc-24db-4c90-81fc-d9a898a7a643-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.082423 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a91c-account-create-update-78wch" event={"ID":"14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b","Type":"ContainerDied","Data":"9e6821d23d1177096438394d565714f18a53a7a4dd733fb7a8e70a77185f0689"} Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.082482 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e6821d23d1177096438394d565714f18a53a7a4dd733fb7a8e70a77185f0689" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.082447 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a91c-account-create-update-78wch" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.084009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" event={"ID":"3d34dccc-24db-4c90-81fc-d9a898a7a643","Type":"ContainerDied","Data":"04b6f12d54cbf4b83906547897967603aab368d0a688e01e0e560541f1268e26"} Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.084023 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b478fbf79-l44nc" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.084035 4684 scope.go:117] "RemoveContainer" containerID="49fd1c1e6498827bc914fcc2c0a38944252a1fe19582ccda62f7fe76e7196a81" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.093858 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" event={"ID":"e849936f-39a5-4742-b2d8-d74a04de0ad1","Type":"ContainerDied","Data":"3242b94f51ca802acd16b7bf5bfe912420dcb44fdc2381f6083941f6f278dfc5"} Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.093902 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3242b94f51ca802acd16b7bf5bfe912420dcb44fdc2381f6083941f6f278dfc5" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.093969 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e4a7-account-create-update-mx2vl" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.131844 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b478fbf79-l44nc"] Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.142266 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b478fbf79-l44nc"] Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.293965 4684 scope.go:117] "RemoveContainer" containerID="683ab71d136a2e6ae2fcb3d81791aefcc10b3efc99221d65c4e8e1c8f9fa2a5e" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.646484 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" path="/var/lib/kubelet/pods/3d34dccc-24db-4c90-81fc-d9a898a7a643/volumes" Jan 23 09:33:27 crc kubenswrapper[4684]: I0123 09:33:27.892587 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb45b76fb-6d9bh"] Jan 23 09:33:27 crc kubenswrapper[4684]: W0123 09:33:27.897384 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd239343a_876f_4e5e_abf8_2bd91fee9812.slice/crio-b9edb14c96101ffbc887716f1afbd1f94304986e4a5e89ad6831c791e7d689ea WatchSource:0}: Error finding container b9edb14c96101ffbc887716f1afbd1f94304986e4a5e89ad6831c791e7d689ea: Status 404 returned error can't find the container with id b9edb14c96101ffbc887716f1afbd1f94304986e4a5e89ad6831c791e7d689ea Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.100670 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7pzwl"] Jan 23 09:33:28 crc kubenswrapper[4684]: E0123 09:33:28.101386 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerName="dnsmasq-dns" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101405 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerName="dnsmasq-dns" Jan 23 09:33:28 crc kubenswrapper[4684]: E0123 09:33:28.101429 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerName="init" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101435 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerName="init" Jan 23 09:33:28 crc kubenswrapper[4684]: E0123 09:33:28.101447 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e849936f-39a5-4742-b2d8-d74a04de0ad1" containerName="mariadb-account-create-update" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101453 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e849936f-39a5-4742-b2d8-d74a04de0ad1" containerName="mariadb-account-create-update" Jan 23 09:33:28 crc kubenswrapper[4684]: E0123 09:33:28.101464 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" containerName="mariadb-account-create-update" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101515 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" containerName="mariadb-account-create-update" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101675 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e849936f-39a5-4742-b2d8-d74a04de0ad1" containerName="mariadb-account-create-update" Jan 23 
09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101688 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" containerName="mariadb-account-create-update" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.101719 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d34dccc-24db-4c90-81fc-d9a898a7a643" containerName="dnsmasq-dns" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.102420 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.119537 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.122896 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.123109 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k5ct5" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.139118 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7pzwl"] Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.215772 4684 generic.go:334] "Generic (PLEG): container finished" podID="42713741-8e02-44d5-b649-adf7d0f80837" containerID="c6cc27d87fa2d6fc77d881fd0e138e9dba9a6efab6e3dd65d56bd28cefb0b855" exitCode=0 Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.215859 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" event={"ID":"42713741-8e02-44d5-b649-adf7d0f80837","Type":"ContainerDied","Data":"c6cc27d87fa2d6fc77d881fd0e138e9dba9a6efab6e3dd65d56bd28cefb0b855"} Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.223225 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb45b76fb-6d9bh" event={"ID":"d239343a-876f-4e5e-abf8-2bd91fee9812","Type":"ContainerStarted","Data":"b9edb14c96101ffbc887716f1afbd1f94304986e4a5e89ad6831c791e7d689ea"} Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.234758 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.235043 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-scripts\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.235123 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jggsc\" (UniqueName: \"kubernetes.io/projected/71a684b6-60c9-4017-91d1-7a8e340d8482-kube-api-access-jggsc\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.235303 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-config-data\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.338138 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.338238 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-scripts\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.338270 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jggsc\" (UniqueName: \"kubernetes.io/projected/71a684b6-60c9-4017-91d1-7a8e340d8482-kube-api-access-jggsc\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.338371 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-config-data\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.387496 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-config-data\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.423839 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jggsc\" (UniqueName: \"kubernetes.io/projected/71a684b6-60c9-4017-91d1-7a8e340d8482-kube-api-access-jggsc\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.473754 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.476226 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-scripts\") pod \"nova-cell0-conductor-db-sync-7pzwl\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:28 crc kubenswrapper[4684]: I0123 09:33:28.499249 4684 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:33:29 crc kubenswrapper[4684]: I0123 09:33:29.218982 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7pzwl"] Jan 23 09:33:29 crc kubenswrapper[4684]: W0123 09:33:29.292732 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71a684b6_60c9_4017_91d1_7a8e340d8482.slice/crio-cd661dba9f1d5cf7a2365e21e23c842370d7ddb99ac8acd270a81a8c761e777d WatchSource:0}: Error finding container cd661dba9f1d5cf7a2365e21e23c842370d7ddb99ac8acd270a81a8c761e777d: Status 404 returned error can't find the container with id cd661dba9f1d5cf7a2365e21e23c842370d7ddb99ac8acd270a81a8c761e777d Jan 23 09:33:29 crc kubenswrapper[4684]: I0123 09:33:29.310806 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74bcc55f89-qgvh5" event={"ID":"996c56f4-2118-4795-91da-d78f1ad2f792","Type":"ContainerStarted","Data":"b5b11c4fadff5dc8d2722023cd1846cd7e6e488d9f004c07fd1cf1daf2225c6c"} Jan 23 09:33:29 crc kubenswrapper[4684]: I0123 09:33:29.318096 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" event={"ID":"dd332188-f0b4-4a86-a7ec-c722f64e1e41","Type":"ContainerStarted","Data":"c760927669cf8f50dc0ceebaa17b67abe324f586972604e13da26b40eff23c2b"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.109413 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.425962 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb45b76fb-6d9bh" event={"ID":"d239343a-876f-4e5e-abf8-2bd91fee9812","Type":"ContainerStarted","Data":"bc3b894308195b52a5b43d7d1f11911d7beca3d2e88b4c195de89781004ac242"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.426226 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb45b76fb-6d9bh" event={"ID":"d239343a-876f-4e5e-abf8-2bd91fee9812","Type":"ContainerStarted","Data":"e30f4967e2ecedc4ca6d0d86191755ef99d2558efabb97a4af884cafae2a83d5"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.426853 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.427795 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb45b76fb-6d9bh" Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.444104 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59aa4e93-3c29-45d1-95d0-7cc3f595765a","Type":"ContainerStarted","Data":"5145cfc99a3d8b0e61f91a088872062cb3601e812ac9f9eb16ac734dda1fb422"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.459504 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" event={"ID":"71a684b6-60c9-4017-91d1-7a8e340d8482","Type":"ContainerStarted","Data":"cd661dba9f1d5cf7a2365e21e23c842370d7ddb99ac8acd270a81a8c761e777d"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.467294 
4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74bcc55f89-qgvh5" event={"ID":"996c56f4-2118-4795-91da-d78f1ad2f792","Type":"ContainerStarted","Data":"7ceaaaa17732fe60eccf92ac69b1812146aa3a883fcf53f6c08a1f19650304f5"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.478318 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" event={"ID":"dd332188-f0b4-4a86-a7ec-c722f64e1e41","Type":"ContainerStarted","Data":"2a8be7970542cd86ebd87a35799f78f35fac531c31a1fdc2b5e1e2a7f041cf96"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.490097 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8693d7b-d2eb-4be6-95f7-299baceab47f","Type":"ContainerStarted","Data":"ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.500594 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerStarted","Data":"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.516630 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fb45b76fb-6d9bh" podStartSLOduration=4.5166078800000005 podStartE2EDuration="4.51660788s" podCreationTimestamp="2026-01-23 09:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:30.472201596 +0000 UTC m=+1583.095580137" watchObservedRunningTime="2026-01-23 09:33:30.51660788 +0000 UTC m=+1583.139986421" Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.535253 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" event={"ID":"42713741-8e02-44d5-b649-adf7d0f80837","Type":"ContainerStarted","Data":"70ea31ef317297c4f56b11a83ed391191a955809c198b6bfd8857276491501e7"} Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.535675 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.558372 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-74bcc55f89-qgvh5" podStartSLOduration=5.777357387 podStartE2EDuration="13.558353167s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="2026-01-23 09:33:19.786425742 +0000 UTC m=+1572.409804283" lastFinishedPulling="2026-01-23 09:33:27.567421522 +0000 UTC m=+1580.190800063" observedRunningTime="2026-01-23 09:33:30.52248238 +0000 UTC m=+1583.145860931" watchObservedRunningTime="2026-01-23 09:33:30.558353167 +0000 UTC m=+1583.181731718" Jan 23 09:33:30 crc kubenswrapper[4684]: I0123 09:33:30.560121 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c6d999bfd-wgh9p" podStartSLOduration=6.056004543 podStartE2EDuration="13.560112098s" podCreationTimestamp="2026-01-23 09:33:17 +0000 UTC" firstStartedPulling="2026-01-23 09:33:19.8469057 +0000 UTC m=+1572.470284241" lastFinishedPulling="2026-01-23 09:33:27.351013265 +0000 UTC m=+1579.974391796" observedRunningTime="2026-01-23 09:33:30.559013246 +0000 UTC m=+1583.182391797" watchObservedRunningTime="2026-01-23 09:33:30.560112098 +0000 UTC m=+1583.183490649" Jan 23 09:33:30 crc 
kubenswrapper[4684]: I0123 09:33:30.602477 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" podStartSLOduration=9.602456832 podStartE2EDuration="9.602456832s" podCreationTimestamp="2026-01-23 09:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:30.59234449 +0000 UTC m=+1583.215723041" watchObservedRunningTime="2026-01-23 09:33:30.602456832 +0000 UTC m=+1583.225835373" Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.549621 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59aa4e93-3c29-45d1-95d0-7cc3f595765a","Type":"ContainerStarted","Data":"8619dfef25170ec3f006774f2f2ce9651e9b6ddea933ef1ad8127965eb9a0d7a"} Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.565005 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8693d7b-d2eb-4be6-95f7-299baceab47f","Type":"ContainerStarted","Data":"47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f"} Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.565187 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api-log" containerID="cri-o://ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a" gracePeriod=30 Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.565456 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.565501 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api" containerID="cri-o://47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f" gracePeriod=30 Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.639728 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerStarted","Data":"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe"} Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.646870 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.651153 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.063085642 podStartE2EDuration="10.651133253s" podCreationTimestamp="2026-01-23 09:33:21 +0000 UTC" firstStartedPulling="2026-01-23 09:33:24.254817045 +0000 UTC m=+1576.878195586" lastFinishedPulling="2026-01-23 09:33:27.842864656 +0000 UTC m=+1580.466243197" observedRunningTime="2026-01-23 09:33:31.600977813 +0000 UTC m=+1584.224356374" watchObservedRunningTime="2026-01-23 09:33:31.651133253 +0000 UTC m=+1584.274511794" Jan 23 09:33:31 crc kubenswrapper[4684]: I0123 09:33:31.672921 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.672894562 podStartE2EDuration="10.672894562s" podCreationTimestamp="2026-01-23 09:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:31.660628087 +0000 UTC 
m=+1584.284006628" watchObservedRunningTime="2026-01-23 09:33:31.672894562 +0000 UTC m=+1584.296273103" Jan 23 09:33:32 crc kubenswrapper[4684]: I0123 09:33:32.605981 4684 generic.go:334] "Generic (PLEG): container finished" podID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerID="ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a" exitCode=143 Jan 23 09:33:32 crc kubenswrapper[4684]: I0123 09:33:32.606049 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8693d7b-d2eb-4be6-95f7-299baceab47f","Type":"ContainerDied","Data":"ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a"} Jan 23 09:33:32 crc kubenswrapper[4684]: I0123 09:33:32.613106 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerStarted","Data":"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb"} Jan 23 09:33:32 crc kubenswrapper[4684]: I0123 09:33:32.819853 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:33:32 crc kubenswrapper[4684]: I0123 09:33:32.820283 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:33:33 crc kubenswrapper[4684]: I0123 09:33:33.641422 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerStarted","Data":"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6"} Jan 23 09:33:33 crc kubenswrapper[4684]: I0123 09:33:33.642056 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:33:33 crc kubenswrapper[4684]: I0123 09:33:33.672771 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.938114751 podStartE2EDuration="13.672749013s" podCreationTimestamp="2026-01-23 09:33:20 +0000 UTC" firstStartedPulling="2026-01-23 09:33:24.297288363 +0000 UTC m=+1576.920666904" lastFinishedPulling="2026-01-23 09:33:33.031922605 +0000 UTC m=+1585.655301166" observedRunningTime="2026-01-23 09:33:33.667959714 +0000 UTC m=+1586.291338275" watchObservedRunningTime="2026-01-23 09:33:33.672749013 +0000 UTC m=+1586.296127564" Jan 23 09:33:33 crc kubenswrapper[4684]: I0123 09:33:33.781916 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:33:35 crc kubenswrapper[4684]: I0123 09:33:35.155029 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded 
Jan 23 09:33:35 crc kubenswrapper[4684]: I0123 09:33:35.155029 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:33:35 crc kubenswrapper[4684]: I0123 09:33:35.211934 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b96468b6b-tn94s"
Jan 23 09:33:36 crc kubenswrapper[4684]: I0123 09:33:36.648377 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.157:8080/\": dial tcp 10.217.0.157:8080: connect: connection refused"
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.029857 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx"
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.114770 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-694dbb6647-xtjr2"]
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.115079 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="dnsmasq-dns" containerID="cri-o://b34cc2bb7b14772f09b40ed69d363f104d610f45095bbc28810f6418559a9a0c" gracePeriod=10
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.394954 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b96468b6b-tn94s"
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.733409 4684 generic.go:334] "Generic (PLEG): container finished" podID="3e252874-6205-4570-a8a8-dada614f685e" containerID="b34cc2bb7b14772f09b40ed69d363f104d610f45095bbc28810f6418559a9a0c" exitCode=0
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.733813 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" event={"ID":"3e252874-6205-4570-a8a8-dada614f685e","Type":"ContainerDied","Data":"b34cc2bb7b14772f09b40ed69d363f104d610f45095bbc28810f6418559a9a0c"}
Jan 23 09:33:37 crc kubenswrapper[4684]: I0123 09:33:37.862112 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:33:38 crc kubenswrapper[4684]: I0123 09:33:38.754286 4684 generic.go:334] "Generic (PLEG): container finished" podID="5fd7bf23-46a9-4032-97f0-8d7984b734e0" containerID="1f92611b2ba669fe16cef70364ca7ce8e9c1cbf3585f43341dfcf83194801d6f" exitCode=0
Jan 23 09:33:38 crc kubenswrapper[4684]: I0123 09:33:38.754336 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mjwvr" event={"ID":"5fd7bf23-46a9-4032-97f0-8d7984b734e0","Type":"ContainerDied","Data":"1f92611b2ba669fe16cef70364ca7ce8e9c1cbf3585f43341dfcf83194801d6f"}
Jan 23 09:33:40 crc kubenswrapper[4684]: I0123 09:33:40.028893 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb45b76fb-6d9bh"
Jan 23 09:33:40 crc kubenswrapper[4684]: I0123 09:33:40.767160 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb45b76fb-6d9bh"
Jan 23 09:33:40 crc kubenswrapper[4684]: I0123 09:33:40.830093 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b96468b6b-tn94s"]
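Note: the probe output "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is the wording Go's net/http client produces when its whole-request timeout fires before response headers arrive; this is how the kubelet's HTTP prober surfaces a check that ran past its timeoutSeconds, as opposed to the "connect: connection refused" failures where nothing was listening at all. A minimal sketch of the failing pattern (the endpoint is taken from the entries above; the one-second timeout is an assumed stand-in for the probe's timeoutSeconds, not read from the pod spec):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second} // plays the role of timeoutSeconds
        resp, err := client.Get("http://10.217.0.155:9311/healthcheck")
        if err != nil {
            // on a slow server err reads: context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe result:", resp.Status) // kubelet treats 2xx/3xx as success
    }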
Jan 23 09:33:40 crc kubenswrapper[4684]: I0123 09:33:40.830375 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" containerID="cri-o://1263c51796e9cd2b6f83c2218e9dba9668852d7e31ce74acbbcda5bfc627c52e" gracePeriod=30
Jan 23 09:33:40 crc kubenswrapper[4684]: I0123 09:33:40.830860 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" containerID="cri-o://b9e405e70e2266d7fc6dbefa73f7a4ed51df68a9d096ed3eba47fe22d807ecb3" gracePeriod=30
Jan 23 09:33:41 crc kubenswrapper[4684]: I0123 09:33:41.274923 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused"
Jan 23 09:33:41 crc kubenswrapper[4684]: I0123 09:33:41.788852 4684 generic.go:334] "Generic (PLEG): container finished" podID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerID="1263c51796e9cd2b6f83c2218e9dba9668852d7e31ce74acbbcda5bfc627c52e" exitCode=143
Jan 23 09:33:41 crc kubenswrapper[4684]: I0123 09:33:41.788900 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b96468b6b-tn94s" event={"ID":"968cfa50-ff5f-4484-8a59-2132539ba65b","Type":"ContainerDied","Data":"1263c51796e9cd2b6f83c2218e9dba9668852d7e31ce74acbbcda5bfc627c52e"}
Jan 23 09:33:42 crc kubenswrapper[4684]: I0123 09:33:42.039837 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 23 09:33:42 crc kubenswrapper[4684]: I0123 09:33:42.087871 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 09:33:42 crc kubenswrapper[4684]: I0123 09:33:42.137883 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.159:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:33:42 crc kubenswrapper[4684]: I0123 09:33:42.799241 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="cinder-scheduler" containerID="cri-o://5145cfc99a3d8b0e61f91a088872062cb3601e812ac9f9eb16ac734dda1fb422" gracePeriod=30
Jan 23 09:33:42 crc kubenswrapper[4684]: I0123 09:33:42.799511 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="probe" containerID="cri-o://8619dfef25170ec3f006774f2f2ce9651e9b6ddea933ef1ad8127965eb9a0d7a" gracePeriod=30
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.553983 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:55852->10.217.0.155:9311: read: connection reset by peer"
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.554018 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b96468b6b-tn94s" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:55842->10.217.0.155:9311: read: connection reset by peer"
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.838461 4684 generic.go:334] "Generic (PLEG): container finished" podID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerID="b9e405e70e2266d7fc6dbefa73f7a4ed51df68a9d096ed3eba47fe22d807ecb3" exitCode=0
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.838515 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b96468b6b-tn94s" event={"ID":"968cfa50-ff5f-4484-8a59-2132539ba65b","Type":"ContainerDied","Data":"b9e405e70e2266d7fc6dbefa73f7a4ed51df68a9d096ed3eba47fe22d807ecb3"}
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.844119 4684 generic.go:334] "Generic (PLEG): container finished" podID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerID="8619dfef25170ec3f006774f2f2ce9651e9b6ddea933ef1ad8127965eb9a0d7a" exitCode=0
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.844184 4684 generic.go:334] "Generic (PLEG): container finished" podID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerID="5145cfc99a3d8b0e61f91a088872062cb3601e812ac9f9eb16ac734dda1fb422" exitCode=0
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.844234 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59aa4e93-3c29-45d1-95d0-7cc3f595765a","Type":"ContainerDied","Data":"8619dfef25170ec3f006774f2f2ce9651e9b6ddea933ef1ad8127965eb9a0d7a"}
Jan 23 09:33:44 crc kubenswrapper[4684]: I0123 09:33:44.844307 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59aa4e93-3c29-45d1-95d0-7cc3f595765a","Type":"ContainerDied","Data":"5145cfc99a3d8b0e61f91a088872062cb3601e812ac9f9eb16ac734dda1fb422"}
Jan 23 09:33:45 crc kubenswrapper[4684]: I0123 09:33:45.218166 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 23 09:33:46 crc kubenswrapper[4684]: I0123 09:33:46.268942 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused"
Jan 23 09:33:47 crc kubenswrapper[4684]: E0123 09:33:47.097574 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be"
Jan 23 09:33:47 crc kubenswrapper[4684]: E0123 09:33:47.098192 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jggsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-7pzwl_openstack(71a684b6-60c9-4017-91d1-7a8e340d8482): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 09:33:47 crc kubenswrapper[4684]: E0123 09:33:47.099320 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" podUID="71a684b6-60c9-4017-91d1-7a8e340d8482"
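Note: the three E-lines above record one failed CRI pull (canceled while copying the image config) propagating upward: log.go reports the RPC error, kuberuntime_manager wraps it as ErrImagePull for the nova-cell0-conductor-db-sync container, and pod_workers skips the sync. After a failed pull the kubelet does not retry immediately but waits out an exponential backoff, which is why the same pod reappears with ImagePullBackOff ("Back-off pulling image ...") at 09:33:47.898035 further down. A minimal sketch of that retry shape; the 10s initial delay and 5m cap mirror commonly cited kubelet defaults and are an assumption, not read from this node's configuration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("pull attempt %d failed; backing off %v\n", attempt, delay)
            delay *= 2 // doubled after every failure ...
            if delay > maxDelay {
                delay = maxDelay // ... until the cap is reached
            }
        }
    }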
Need to start a new one" pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.331811 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-config\") pod \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.332673 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-combined-ca-bundle\") pod \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.332738 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8hzp\" (UniqueName: \"kubernetes.io/projected/5fd7bf23-46a9-4032-97f0-8d7984b734e0-kube-api-access-f8hzp\") pod \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\" (UID: \"5fd7bf23-46a9-4032-97f0-8d7984b734e0\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.338979 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd7bf23-46a9-4032-97f0-8d7984b734e0-kube-api-access-f8hzp" (OuterVolumeSpecName: "kube-api-access-f8hzp") pod "5fd7bf23-46a9-4032-97f0-8d7984b734e0" (UID: "5fd7bf23-46a9-4032-97f0-8d7984b734e0"). InnerVolumeSpecName "kube-api-access-f8hzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.366935 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-config" (OuterVolumeSpecName: "config") pod "5fd7bf23-46a9-4032-97f0-8d7984b734e0" (UID: "5fd7bf23-46a9-4032-97f0-8d7984b734e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.374580 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fd7bf23-46a9-4032-97f0-8d7984b734e0" (UID: "5fd7bf23-46a9-4032-97f0-8d7984b734e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.435566 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8hzp\" (UniqueName: \"kubernetes.io/projected/5fd7bf23-46a9-4032-97f0-8d7984b734e0-kube-api-access-f8hzp\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.435610 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.435623 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd7bf23-46a9-4032-97f0-8d7984b734e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.623039 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.742375 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data\") pod \"968cfa50-ff5f-4484-8a59-2132539ba65b\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.742466 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqvzv\" (UniqueName: \"kubernetes.io/projected/968cfa50-ff5f-4484-8a59-2132539ba65b-kube-api-access-mqvzv\") pod \"968cfa50-ff5f-4484-8a59-2132539ba65b\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.742556 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data-custom\") pod \"968cfa50-ff5f-4484-8a59-2132539ba65b\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.742591 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-combined-ca-bundle\") pod \"968cfa50-ff5f-4484-8a59-2132539ba65b\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.742768 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/968cfa50-ff5f-4484-8a59-2132539ba65b-logs\") pod \"968cfa50-ff5f-4484-8a59-2132539ba65b\" (UID: \"968cfa50-ff5f-4484-8a59-2132539ba65b\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.744301 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968cfa50-ff5f-4484-8a59-2132539ba65b-logs" (OuterVolumeSpecName: "logs") pod "968cfa50-ff5f-4484-8a59-2132539ba65b" (UID: "968cfa50-ff5f-4484-8a59-2132539ba65b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.794996 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968cfa50-ff5f-4484-8a59-2132539ba65b-kube-api-access-mqvzv" (OuterVolumeSpecName: "kube-api-access-mqvzv") pod "968cfa50-ff5f-4484-8a59-2132539ba65b" (UID: "968cfa50-ff5f-4484-8a59-2132539ba65b"). InnerVolumeSpecName "kube-api-access-mqvzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.799994 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "968cfa50-ff5f-4484-8a59-2132539ba65b" (UID: "968cfa50-ff5f-4484-8a59-2132539ba65b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.846407 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqvzv\" (UniqueName: \"kubernetes.io/projected/968cfa50-ff5f-4484-8a59-2132539ba65b-kube-api-access-mqvzv\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.846674 4684 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.846806 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/968cfa50-ff5f-4484-8a59-2132539ba65b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.849485 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "968cfa50-ff5f-4484-8a59-2132539ba65b" (UID: "968cfa50-ff5f-4484-8a59-2132539ba65b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.880358 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"59aa4e93-3c29-45d1-95d0-7cc3f595765a","Type":"ContainerDied","Data":"27030967a5e8b1deb8d56112475a6bda5e7c24048a27e08eaf33ffd6004dc48c"} Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.880407 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27030967a5e8b1deb8d56112475a6bda5e7c24048a27e08eaf33ffd6004dc48c" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.883003 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mjwvr" event={"ID":"5fd7bf23-46a9-4032-97f0-8d7984b734e0","Type":"ContainerDied","Data":"7a59865270964d2c35defdb095765eedbe56e33f2476bd64530962ba33ecbd0d"} Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.883041 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a59865270964d2c35defdb095765eedbe56e33f2476bd64530962ba33ecbd0d" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.883119 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mjwvr" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.889650 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b96468b6b-tn94s" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.889918 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b96468b6b-tn94s" event={"ID":"968cfa50-ff5f-4484-8a59-2132539ba65b","Type":"ContainerDied","Data":"aca249ba45447d1283e774c00f4e33e964f68d136e591ee2c3c57ed1aeaaae84"} Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.889970 4684 scope.go:117] "RemoveContainer" containerID="b9e405e70e2266d7fc6dbefa73f7a4ed51df68a9d096ed3eba47fe22d807ecb3" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.896125 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" event={"ID":"3e252874-6205-4570-a8a8-dada614f685e","Type":"ContainerDied","Data":"03b24266e00d7a6426d72a1176625f55c3735a1b8002d3a7034f6303b405d35b"} Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.896164 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b24266e00d7a6426d72a1176625f55c3735a1b8002d3a7034f6303b405d35b" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.897850 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:33:47 crc kubenswrapper[4684]: E0123 09:33:47.898035 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" podUID="71a684b6-60c9-4017-91d1-7a8e340d8482" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.900010 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data" (OuterVolumeSpecName: "config-data") pod "968cfa50-ff5f-4484-8a59-2132539ba65b" (UID: "968cfa50-ff5f-4484-8a59-2132539ba65b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.925468 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.935130 4684 scope.go:117] "RemoveContainer" containerID="1263c51796e9cd2b6f83c2218e9dba9668852d7e31ce74acbbcda5bfc627c52e" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.948381 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22tmt\" (UniqueName: \"kubernetes.io/projected/3e252874-6205-4570-a8a8-dada614f685e-kube-api-access-22tmt\") pod \"3e252874-6205-4570-a8a8-dada614f685e\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.948458 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-config\") pod \"3e252874-6205-4570-a8a8-dada614f685e\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.948503 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-sb\") pod \"3e252874-6205-4570-a8a8-dada614f685e\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.948557 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-dns-svc\") pod \"3e252874-6205-4570-a8a8-dada614f685e\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.948662 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-nb\") pod \"3e252874-6205-4570-a8a8-dada614f685e\" (UID: \"3e252874-6205-4570-a8a8-dada614f685e\") " Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.949140 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:47 crc kubenswrapper[4684]: I0123 09:33:47.949152 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968cfa50-ff5f-4484-8a59-2132539ba65b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.044180 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e252874-6205-4570-a8a8-dada614f685e-kube-api-access-22tmt" (OuterVolumeSpecName: "kube-api-access-22tmt") pod "3e252874-6205-4570-a8a8-dada614f685e" (UID: "3e252874-6205-4570-a8a8-dada614f685e"). InnerVolumeSpecName "kube-api-access-22tmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.051101 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data\") pod \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.051937 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59aa4e93-3c29-45d1-95d0-7cc3f595765a-etc-machine-id\") pod \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.052641 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-combined-ca-bundle\") pod \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.052813 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-scripts\") pod \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.053254 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8ccn\" (UniqueName: \"kubernetes.io/projected/59aa4e93-3c29-45d1-95d0-7cc3f595765a-kube-api-access-x8ccn\") pod \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.053374 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data-custom\") pod \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\" (UID: \"59aa4e93-3c29-45d1-95d0-7cc3f595765a\") " Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.054849 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e252874-6205-4570-a8a8-dada614f685e" (UID: "3e252874-6205-4570-a8a8-dada614f685e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.055670 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22tmt\" (UniqueName: \"kubernetes.io/projected/3e252874-6205-4570-a8a8-dada614f685e-kube-api-access-22tmt\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.056921 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.061881 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59aa4e93-3c29-45d1-95d0-7cc3f595765a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "59aa4e93-3c29-45d1-95d0-7cc3f595765a" (UID: "59aa4e93-3c29-45d1-95d0-7cc3f595765a"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.062766 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e252874-6205-4570-a8a8-dada614f685e" (UID: "3e252874-6205-4570-a8a8-dada614f685e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.099944 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "59aa4e93-3c29-45d1-95d0-7cc3f595765a" (UID: "59aa4e93-3c29-45d1-95d0-7cc3f595765a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.102741 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-scripts" (OuterVolumeSpecName: "scripts") pod "59aa4e93-3c29-45d1-95d0-7cc3f595765a" (UID: "59aa4e93-3c29-45d1-95d0-7cc3f595765a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.110615 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59aa4e93-3c29-45d1-95d0-7cc3f595765a-kube-api-access-x8ccn" (OuterVolumeSpecName: "kube-api-access-x8ccn") pod "59aa4e93-3c29-45d1-95d0-7cc3f595765a" (UID: "59aa4e93-3c29-45d1-95d0-7cc3f595765a"). InnerVolumeSpecName "kube-api-access-x8ccn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.126160 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-config" (OuterVolumeSpecName: "config") pod "3e252874-6205-4570-a8a8-dada614f685e" (UID: "3e252874-6205-4570-a8a8-dada614f685e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.144674 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59aa4e93-3c29-45d1-95d0-7cc3f595765a" (UID: "59aa4e93-3c29-45d1-95d0-7cc3f595765a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.148439 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e252874-6205-4570-a8a8-dada614f685e" (UID: "3e252874-6205-4570-a8a8-dada614f685e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159317 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159393 4684 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59aa4e93-3c29-45d1-95d0-7cc3f595765a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159407 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159418 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159431 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159442 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8ccn\" (UniqueName: \"kubernetes.io/projected/59aa4e93-3c29-45d1-95d0-7cc3f595765a-kube-api-access-x8ccn\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159453 4684 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.159465 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e252874-6205-4570-a8a8-dada614f685e-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.225215 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b96468b6b-tn94s"] Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.234183 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b96468b6b-tn94s"] Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.254196 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data" (OuterVolumeSpecName: "config-data") pod "59aa4e93-3c29-45d1-95d0-7cc3f595765a" (UID: "59aa4e93-3c29-45d1-95d0-7cc3f595765a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.261771 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59aa4e93-3c29-45d1-95d0-7cc3f595765a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.336482 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.341049 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-central-agent" containerID="cri-o://0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602" gracePeriod=30 Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.341252 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="proxy-httpd" containerID="cri-o://4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6" gracePeriod=30 Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.341325 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="sg-core" containerID="cri-o://ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb" gracePeriod=30 Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.341374 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-notification-agent" containerID="cri-o://f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe" gracePeriod=30 Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.368956 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.156:3000/\": EOF" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.447498 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68c677b759-mvp9m"] Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.447910 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="dnsmasq-dns" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.447933 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="dnsmasq-dns" Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.447944 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="init" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.447950 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="init" Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.447962 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.447969 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.447987 4684 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="cinder-scheduler" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.447996 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="cinder-scheduler" Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.448016 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="probe" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448023 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="probe" Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.448039 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd7bf23-46a9-4032-97f0-8d7984b734e0" containerName="neutron-db-sync" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448048 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd7bf23-46a9-4032-97f0-8d7984b734e0" containerName="neutron-db-sync" Jan 23 09:33:48 crc kubenswrapper[4684]: E0123 09:33:48.448072 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448078 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448221 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448235 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e252874-6205-4570-a8a8-dada614f685e" containerName="dnsmasq-dns" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448243 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd7bf23-46a9-4032-97f0-8d7984b734e0" containerName="neutron-db-sync" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448250 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="probe" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448260 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" containerName="cinder-scheduler" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.448271 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" containerName="barbican-api-log" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.449192 4684 util.go:30] "No sandbox for pod can be found. 
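Note: this burst of cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 lines is housekeeping that accompanies admission of the replacement pod (the "SyncLoop ADD" for dnsmasq-dns-68c677b759-mvp9m just above): the CPU and memory managers walk their checkpointed per-container assignments and drop state for containers whose pods no longer exist, which is exactly the set of dnsmasq, barbican, cinder-scheduler and neutron-db-sync containers torn down earlier. Each E-line ("RemoveStaleState: removing container") is paired with an I-line deleting the corresponding CPUSet or memory assignment.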
Need to start a new one" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.464925 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-nb\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.464991 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-sb\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.465028 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-config\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.465054 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-dns-svc\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.465080 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46vk\" (UniqueName: \"kubernetes.io/projected/72621f7c-b422-4946-afbc-8d3d049ee05c-kube-api-access-k46vk\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.478984 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c677b759-mvp9m"] Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.568731 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-nb\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.568792 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-sb\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.568818 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-config\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.568843 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-dns-svc\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.568863 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k46vk\" (UniqueName: \"kubernetes.io/projected/72621f7c-b422-4946-afbc-8d3d049ee05c-kube-api-access-k46vk\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.569871 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-nb\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.571348 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-config\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.571732 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-sb\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.572099 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-dns-svc\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.592856 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75f58545cb-xtfdc"] Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.595959 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.602169 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.603168 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.603387 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5g4fk" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.616386 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.620493 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75f58545cb-xtfdc"] Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.645004 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k46vk\" (UniqueName: \"kubernetes.io/projected/72621f7c-b422-4946-afbc-8d3d049ee05c-kube-api-access-k46vk\") pod \"dnsmasq-dns-68c677b759-mvp9m\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.670039 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-combined-ca-bundle\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.670089 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-ovndb-tls-certs\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.670193 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7sb\" (UniqueName: \"kubernetes.io/projected/0ffa2554-38b9-498e-a08f-465b4454ed2d-kube-api-access-vw7sb\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.670252 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-httpd-config\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.670353 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-config\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.783006 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-combined-ca-bundle\") pod 
\"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.783079 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-ovndb-tls-certs\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.783157 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7sb\" (UniqueName: \"kubernetes.io/projected/0ffa2554-38b9-498e-a08f-465b4454ed2d-kube-api-access-vw7sb\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.783214 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-httpd-config\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.783280 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-config\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.793880 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-combined-ca-bundle\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.798073 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-httpd-config\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.798675 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-ovndb-tls-certs\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.799541 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-config\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.820601 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7sb\" (UniqueName: \"kubernetes.io/projected/0ffa2554-38b9-498e-a08f-465b4454ed2d-kube-api-access-vw7sb\") pod \"neutron-75f58545cb-xtfdc\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.859364 
4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.923672 4684 generic.go:334] "Generic (PLEG): container finished" podID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerID="4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6" exitCode=0 Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.923726 4684 generic.go:334] "Generic (PLEG): container finished" podID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerID="ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb" exitCode=2 Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.923781 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerDied","Data":"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6"} Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.923831 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerDied","Data":"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb"} Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.933685 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.936089 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694dbb6647-xtjr2" Jan 23 09:33:48 crc kubenswrapper[4684]: I0123 09:33:48.993209 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.024578 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-694dbb6647-xtjr2"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.052535 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-694dbb6647-xtjr2"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.069227 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.116818 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.154869 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.157358 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.164080 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.170408 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.196632 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.196724 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.196745 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.196794 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.196815 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a1bad04-8e0e-4dee-8cef-90091c05526f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.196834 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrln\" (UniqueName: \"kubernetes.io/projected/7a1bad04-8e0e-4dee-8cef-90091c05526f-kube-api-access-dwrln\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.298508 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.298554 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.298622 4684 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.298649 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a1bad04-8e0e-4dee-8cef-90091c05526f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.298674 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrln\" (UniqueName: \"kubernetes.io/projected/7a1bad04-8e0e-4dee-8cef-90091c05526f-kube-api-access-dwrln\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.300982 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.307923 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.308598 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a1bad04-8e0e-4dee-8cef-90091c05526f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.312587 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.312986 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.317161 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1bad04-8e0e-4dee-8cef-90091c05526f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.330393 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrln\" (UniqueName: \"kubernetes.io/projected/7a1bad04-8e0e-4dee-8cef-90091c05526f-kube-api-access-dwrln\") pod \"cinder-scheduler-0\" (UID: \"7a1bad04-8e0e-4dee-8cef-90091c05526f\") " pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 
crc kubenswrapper[4684]: I0123 09:33:49.490042 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c677b759-mvp9m"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.518520 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.618634 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e252874-6205-4570-a8a8-dada614f685e" path="/var/lib/kubelet/pods/3e252874-6205-4570-a8a8-dada614f685e/volumes" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.619494 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59aa4e93-3c29-45d1-95d0-7cc3f595765a" path="/var/lib/kubelet/pods/59aa4e93-3c29-45d1-95d0-7cc3f595765a/volumes" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.623444 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968cfa50-ff5f-4484-8a59-2132539ba65b" path="/var/lib/kubelet/pods/968cfa50-ff5f-4484-8a59-2132539ba65b/volumes" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.897170 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75f58545cb-xtfdc"] Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.977481 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" event={"ID":"72621f7c-b422-4946-afbc-8d3d049ee05c","Type":"ContainerStarted","Data":"4b7d95924505f33bd1b354484bc93eca8645cc0c4a783ecd0443011cdffb8074"} Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.985860 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.986641 4684 generic.go:334] "Generic (PLEG): container finished" podID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerID="f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe" exitCode=0 Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.987246 4684 generic.go:334] "Generic (PLEG): container finished" podID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerID="0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602" exitCode=0 Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.987617 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerDied","Data":"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe"} Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.987878 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerDied","Data":"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602"} Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.987892 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c701147-6de2-4cd9-8d2e-05831ceb7ed5","Type":"ContainerDied","Data":"a1f743c3c2b51a03fd01bcb730f1e823e862489a0ba227bead8c54609e516db6"} Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.987910 4684 scope.go:117] "RemoveContainer" containerID="4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6" Jan 23 09:33:49 crc kubenswrapper[4684]: I0123 09:33:49.993608 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f58545cb-xtfdc" 
event={"ID":"0ffa2554-38b9-498e-a08f-465b4454ed2d","Type":"ContainerStarted","Data":"1c19a336b4b20f92f292aed6caba173ffa571f2278c4c532144cc8c661034b3b"} Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.061258 4684 scope.go:117] "RemoveContainer" containerID="ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.109964 4684 scope.go:117] "RemoveContainer" containerID="f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.131559 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-combined-ca-bundle\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.131921 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-config-data\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.131956 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-run-httpd\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.131982 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwtc7\" (UniqueName: \"kubernetes.io/projected/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-kube-api-access-pwtc7\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.132009 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-log-httpd\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.132032 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-scripts\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.132103 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-sg-core-conf-yaml\") pod \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\" (UID: \"8c701147-6de2-4cd9-8d2e-05831ceb7ed5\") " Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.132846 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.133428 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.138816 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-scripts" (OuterVolumeSpecName: "scripts") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.150677 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-kube-api-access-pwtc7" (OuterVolumeSpecName: "kube-api-access-pwtc7") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "kube-api-access-pwtc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.182929 4684 scope.go:117] "RemoveContainer" containerID="0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.220269 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.233902 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.233945 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.233958 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwtc7\" (UniqueName: \"kubernetes.io/projected/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-kube-api-access-pwtc7\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.233973 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.233984 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.250904 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.268392 4684 scope.go:117] "RemoveContainer" containerID="4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6" Jan 23 09:33:50 crc kubenswrapper[4684]: E0123 09:33:50.270674 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6\": container with ID starting with 4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6 not found: ID does not exist" containerID="4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.270743 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6"} err="failed to get container status \"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6\": rpc error: code = NotFound desc = could not find container \"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6\": container with ID starting with 4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6 not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.270775 4684 scope.go:117] "RemoveContainer" containerID="ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb" Jan 23 09:33:50 crc kubenswrapper[4684]: E0123 09:33:50.271622 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb\": container with ID starting with 
ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb not found: ID does not exist" containerID="ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.271670 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb"} err="failed to get container status \"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb\": rpc error: code = NotFound desc = could not find container \"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb\": container with ID starting with ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.271787 4684 scope.go:117] "RemoveContainer" containerID="f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe" Jan 23 09:33:50 crc kubenswrapper[4684]: E0123 09:33:50.272567 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe\": container with ID starting with f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe not found: ID does not exist" containerID="f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.272598 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe"} err="failed to get container status \"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe\": rpc error: code = NotFound desc = could not find container \"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe\": container with ID starting with f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.272616 4684 scope.go:117] "RemoveContainer" containerID="0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602" Jan 23 09:33:50 crc kubenswrapper[4684]: E0123 09:33:50.273259 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602\": container with ID starting with 0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602 not found: ID does not exist" containerID="0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.273289 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602"} err="failed to get container status \"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602\": rpc error: code = NotFound desc = could not find container \"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602\": container with ID starting with 0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602 not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.273309 4684 scope.go:117] "RemoveContainer" containerID="4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.273539 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6"} err="failed to get container status \"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6\": rpc error: code = NotFound desc = could not find container \"4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6\": container with ID starting with 4d380496d7f1c1c1369ecb48f20752749677f53c1e371b082d585c1a2850c7a6 not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.273587 4684 scope.go:117] "RemoveContainer" containerID="ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.274883 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb"} err="failed to get container status \"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb\": rpc error: code = NotFound desc = could not find container \"ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb\": container with ID starting with ccfe035950b77539d1bf0b65dbc64006cb7b105cc84605ceeee03de3e4934fbb not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.274918 4684 scope.go:117] "RemoveContainer" containerID="f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.275879 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe"} err="failed to get container status \"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe\": rpc error: code = NotFound desc = could not find container \"f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe\": container with ID starting with f1ddd13d94ce6c984e49236bee622db3062b3fa57236c352e50209c3320925fe not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.275937 4684 scope.go:117] "RemoveContainer" containerID="0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.276219 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602"} err="failed to get container status \"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602\": rpc error: code = NotFound desc = could not find container \"0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602\": container with ID starting with 0c3ab84493dc275b7864ff07924cf942a94504ec9cf11093b75b115cfa909602 not found: ID does not exist" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.315118 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:33:50 crc kubenswrapper[4684]: W0123 09:33:50.323372 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a1bad04_8e0e_4dee_8cef_90091c05526f.slice/crio-a858d3b9ce6ab8fb93996b6c32afab4bb2394d801282beb1510f140ee811e10b WatchSource:0}: Error finding container a858d3b9ce6ab8fb93996b6c32afab4bb2394d801282beb1510f140ee811e10b: Status 404 returned error can't find the container with id a858d3b9ce6ab8fb93996b6c32afab4bb2394d801282beb1510f140ee811e10b Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.334041 4684 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-config-data" (OuterVolumeSpecName: "config-data") pod "8c701147-6de2-4cd9-8d2e-05831ceb7ed5" (UID: "8c701147-6de2-4cd9-8d2e-05831ceb7ed5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.338256 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:50 crc kubenswrapper[4684]: I0123 09:33:50.338311 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c701147-6de2-4cd9-8d2e-05831ceb7ed5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.011973 4684 generic.go:334] "Generic (PLEG): container finished" podID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerID="dda4ad2f1bf4065b2eaacc6c6322fe692d8f4bb5d47be145db679c8eb509751e" exitCode=0 Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.012150 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" event={"ID":"72621f7c-b422-4946-afbc-8d3d049ee05c","Type":"ContainerDied","Data":"dda4ad2f1bf4065b2eaacc6c6322fe692d8f4bb5d47be145db679c8eb509751e"} Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.019798 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.027260 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f58545cb-xtfdc" event={"ID":"0ffa2554-38b9-498e-a08f-465b4454ed2d","Type":"ContainerStarted","Data":"5a19d5e5234809e1f60691730398cea68dddfa18bfbd96febaa55b2782b5283b"} Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.027310 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f58545cb-xtfdc" event={"ID":"0ffa2554-38b9-498e-a08f-465b4454ed2d","Type":"ContainerStarted","Data":"aa2e0795d891f05c3c3740731d371598dd147a2fa84e2cbb486a96d5e7067258"} Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.028090 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.034332 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a1bad04-8e0e-4dee-8cef-90091c05526f","Type":"ContainerStarted","Data":"a858d3b9ce6ab8fb93996b6c32afab4bb2394d801282beb1510f140ee811e10b"} Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.071658 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75f58545cb-xtfdc" podStartSLOduration=3.071632371 podStartE2EDuration="3.071632371s" podCreationTimestamp="2026-01-23 09:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:51.068605093 +0000 UTC m=+1603.691983644" watchObservedRunningTime="2026-01-23 09:33:51.071632371 +0000 UTC m=+1603.695010922" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.099932 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.117387 4684 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.146391 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:51 crc kubenswrapper[4684]: E0123 09:33:51.146936 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-central-agent" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.146961 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-central-agent" Jan 23 09:33:51 crc kubenswrapper[4684]: E0123 09:33:51.146986 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-notification-agent" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.146994 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-notification-agent" Jan 23 09:33:51 crc kubenswrapper[4684]: E0123 09:33:51.147010 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="sg-core" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.147021 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="sg-core" Jan 23 09:33:51 crc kubenswrapper[4684]: E0123 09:33:51.147033 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="proxy-httpd" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.147040 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="proxy-httpd" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.147258 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-central-agent" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.147277 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="ceilometer-notification-agent" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.147299 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="sg-core" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.147316 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" containerName="proxy-httpd" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.157467 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.162935 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.164558 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.164864 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273374 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-run-httpd\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273457 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-config-data\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273568 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273612 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273641 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-log-httpd\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273680 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp88n\" (UniqueName: \"kubernetes.io/projected/ff866f70-2023-4750-b047-5c39e8fa5072-kube-api-access-lp88n\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.273725 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-scripts\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.378548 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-run-httpd\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.378963 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-config-data\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.379053 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.379082 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.379136 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-log-httpd\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.379161 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp88n\" (UniqueName: \"kubernetes.io/projected/ff866f70-2023-4750-b047-5c39e8fa5072-kube-api-access-lp88n\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.379186 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-scripts\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.385372 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-run-httpd\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.392121 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-log-httpd\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.412610 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.413534 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-scripts\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.413882 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.417692 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-config-data\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.438073 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp88n\" (UniqueName: \"kubernetes.io/projected/ff866f70-2023-4750-b047-5c39e8fa5072-kube-api-access-lp88n\") pod \"ceilometer-0\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.517107 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:51 crc kubenswrapper[4684]: I0123 09:33:51.689997 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c701147-6de2-4cd9-8d2e-05831ceb7ed5" path="/var/lib/kubelet/pods/8c701147-6de2-4cd9-8d2e-05831ceb7ed5/volumes" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.021313 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f5484d975-q9jz7"] Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.023627 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.027294 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.032172 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.053526 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f5484d975-q9jz7"] Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.083432 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" event={"ID":"72621f7c-b422-4946-afbc-8d3d049ee05c","Type":"ContainerStarted","Data":"767992535bab45f55830e63d207705d090cf2bdf374c54e5f3e769573a7cef3b"} Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.085074 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.096458 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-ovndb-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.096523 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm5vj\" (UniqueName: \"kubernetes.io/projected/51e1f37f-89c0-4b47-944a-ca74b33d32ce-kube-api-access-gm5vj\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.096566 4684 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-internal-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.096608 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-combined-ca-bundle\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.096750 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-config\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.096909 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-public-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.097025 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-httpd-config\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.100513 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a1bad04-8e0e-4dee-8cef-90091c05526f","Type":"ContainerStarted","Data":"c3d1e5e40d561242c4b31c3e8ed0a4cf4cbbee58ab2d2ef577c0756b21a5070e"} Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.116128 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" podStartSLOduration=4.11610816 podStartE2EDuration="4.11610816s" podCreationTimestamp="2026-01-23 09:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:52.113118733 +0000 UTC m=+1604.736497274" watchObservedRunningTime="2026-01-23 09:33:52.11610816 +0000 UTC m=+1604.739486701" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.198807 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-combined-ca-bundle\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.198876 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-config\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc 
kubenswrapper[4684]: I0123 09:33:52.198949 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-public-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.199031 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-httpd-config\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.199116 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-ovndb-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.199145 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm5vj\" (UniqueName: \"kubernetes.io/projected/51e1f37f-89c0-4b47-944a-ca74b33d32ce-kube-api-access-gm5vj\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.199177 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-internal-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.214961 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-ovndb-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.215509 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-httpd-config\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.215903 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-combined-ca-bundle\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.216170 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-config\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.216345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-internal-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.229448 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e1f37f-89c0-4b47-944a-ca74b33d32ce-public-tls-certs\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.229648 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm5vj\" (UniqueName: \"kubernetes.io/projected/51e1f37f-89c0-4b47-944a-ca74b33d32ce-kube-api-access-gm5vj\") pod \"neutron-f5484d975-q9jz7\" (UID: \"51e1f37f-89c0-4b47-944a-ca74b33d32ce\") " pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.363346 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:52 crc kubenswrapper[4684]: I0123 09:33:52.365060 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:53 crc kubenswrapper[4684]: I0123 09:33:53.133665 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a1bad04-8e0e-4dee-8cef-90091c05526f","Type":"ContainerStarted","Data":"3ee49b1ecece55385342e309b6b83d9debe18553bd5e8f804d1156b952d12372"} Jan 23 09:33:53 crc kubenswrapper[4684]: I0123 09:33:53.160925 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerStarted","Data":"3561860d85cc7013bfa7d1d8d47577cfa2ed9742b13211c845e71f45e728c725"} Jan 23 09:33:53 crc kubenswrapper[4684]: I0123 09:33:53.164228 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f5484d975-q9jz7"] Jan 23 09:33:53 crc kubenswrapper[4684]: I0123 09:33:53.171542 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.171521295 podStartE2EDuration="4.171521295s" podCreationTimestamp="2026-01-23 09:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:53.169126286 +0000 UTC m=+1605.792504827" watchObservedRunningTime="2026-01-23 09:33:53.171521295 +0000 UTC m=+1605.794899836" Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.176245 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5484d975-q9jz7" event={"ID":"51e1f37f-89c0-4b47-944a-ca74b33d32ce","Type":"ContainerStarted","Data":"32d6ef48d90078043b0e2a2dc6183a1bdc12b99a692dfd23f757a082576650c8"} Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.176942 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5484d975-q9jz7" event={"ID":"51e1f37f-89c0-4b47-944a-ca74b33d32ce","Type":"ContainerStarted","Data":"4c6b349b33253647e8a3575b0e44f14ce3409f52751962ce15df294859a1e50b"} Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.176961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f5484d975-q9jz7" event={"ID":"51e1f37f-89c0-4b47-944a-ca74b33d32ce","Type":"ContainerStarted","Data":"ff7a9deb97a37f8af415554d32fd4d686fc18d9e9b297d1119a4580d4a102fd2"} Jan 23 
09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.178194 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.187813 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerStarted","Data":"5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc"} Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.201304 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.243158 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f5484d975-q9jz7" podStartSLOduration=3.243133558 podStartE2EDuration="3.243133558s" podCreationTimestamp="2026-01-23 09:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:54.211233345 +0000 UTC m=+1606.834611896" watchObservedRunningTime="2026-01-23 09:33:54.243133558 +0000 UTC m=+1606.866512099" Jan 23 09:33:54 crc kubenswrapper[4684]: I0123 09:33:54.519628 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 09:33:55 crc kubenswrapper[4684]: I0123 09:33:55.199871 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerStarted","Data":"0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995"} Jan 23 09:33:55 crc kubenswrapper[4684]: I0123 09:33:55.199939 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerStarted","Data":"1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb"} Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.222363 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerStarted","Data":"71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885"} Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.223085 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.222533 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-central-agent" containerID="cri-o://5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc" gracePeriod=30 Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.223950 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-notification-agent" containerID="cri-o://0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995" gracePeriod=30 Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.224019 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="sg-core" containerID="cri-o://1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb" gracePeriod=30 Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.229734 4684 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="proxy-httpd" containerID="cri-o://71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885" gracePeriod=30 Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.256031 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7349673450000003 podStartE2EDuration="6.255877514s" podCreationTimestamp="2026-01-23 09:33:51 +0000 UTC" firstStartedPulling="2026-01-23 09:33:52.362066371 +0000 UTC m=+1604.985444912" lastFinishedPulling="2026-01-23 09:33:55.88297653 +0000 UTC m=+1608.506355081" observedRunningTime="2026-01-23 09:33:57.244933968 +0000 UTC m=+1609.868312509" watchObservedRunningTime="2026-01-23 09:33:57.255877514 +0000 UTC m=+1609.879256055" Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.990767 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-scs5f"] Jan 23 09:33:57 crc kubenswrapper[4684]: I0123 09:33:57.993329 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.005774 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scs5f"] Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.023619 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-catalog-content\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.023688 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx599\" (UniqueName: \"kubernetes.io/projected/16af4cc6-6815-4216-a5af-3d7ba5720cf3-kube-api-access-gx599\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.023948 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-utilities\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.126170 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-catalog-content\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.126465 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx599\" (UniqueName: \"kubernetes.io/projected/16af4cc6-6815-4216-a5af-3d7ba5720cf3-kube-api-access-gx599\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.126848 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-catalog-content\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.126861 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-utilities\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.127493 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-utilities\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.168063 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx599\" (UniqueName: \"kubernetes.io/projected/16af4cc6-6815-4216-a5af-3d7ba5720cf3-kube-api-access-gx599\") pod \"community-operators-scs5f\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.236090 4684 generic.go:334] "Generic (PLEG): container finished" podID="ff866f70-2023-4750-b047-5c39e8fa5072" containerID="71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885" exitCode=0 Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.236138 4684 generic.go:334] "Generic (PLEG): container finished" podID="ff866f70-2023-4750-b047-5c39e8fa5072" containerID="1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb" exitCode=2 Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.236157 4684 generic.go:334] "Generic (PLEG): container finished" podID="ff866f70-2023-4750-b047-5c39e8fa5072" containerID="0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995" exitCode=0 Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.236173 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerDied","Data":"71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885"} Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.236230 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerDied","Data":"1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb"} Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.236248 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerDied","Data":"0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995"} Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.320175 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.861945 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.933302 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b9fcb755f-mbwwx"] Jan 23 09:33:58 crc kubenswrapper[4684]: I0123 09:33:58.933530 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" podUID="42713741-8e02-44d5-b649-adf7d0f80837" containerName="dnsmasq-dns" containerID="cri-o://70ea31ef317297c4f56b11a83ed391191a955809c198b6bfd8857276491501e7" gracePeriod=10 Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.030426 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scs5f"] Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.258251 4684 generic.go:334] "Generic (PLEG): container finished" podID="42713741-8e02-44d5-b649-adf7d0f80837" containerID="70ea31ef317297c4f56b11a83ed391191a955809c198b6bfd8857276491501e7" exitCode=0 Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.258322 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" event={"ID":"42713741-8e02-44d5-b649-adf7d0f80837","Type":"ContainerDied","Data":"70ea31ef317297c4f56b11a83ed391191a955809c198b6bfd8857276491501e7"} Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.261487 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerStarted","Data":"fdb0fa1522e3663c2d67662752e51e39c00c9bd531351c468bb087802d5912dc"} Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.681187 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.775276 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-dns-svc\") pod \"42713741-8e02-44d5-b649-adf7d0f80837\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.775335 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-sb\") pod \"42713741-8e02-44d5-b649-adf7d0f80837\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.775381 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj2n4\" (UniqueName: \"kubernetes.io/projected/42713741-8e02-44d5-b649-adf7d0f80837-kube-api-access-zj2n4\") pod \"42713741-8e02-44d5-b649-adf7d0f80837\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.775477 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-nb\") pod \"42713741-8e02-44d5-b649-adf7d0f80837\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.775603 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-config\") pod \"42713741-8e02-44d5-b649-adf7d0f80837\" (UID: \"42713741-8e02-44d5-b649-adf7d0f80837\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.802189 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42713741-8e02-44d5-b649-adf7d0f80837-kube-api-access-zj2n4" (OuterVolumeSpecName: "kube-api-access-zj2n4") pod "42713741-8e02-44d5-b649-adf7d0f80837" (UID: "42713741-8e02-44d5-b649-adf7d0f80837"). InnerVolumeSpecName "kube-api-access-zj2n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.866722 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-config" (OuterVolumeSpecName: "config") pod "42713741-8e02-44d5-b649-adf7d0f80837" (UID: "42713741-8e02-44d5-b649-adf7d0f80837"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.878056 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.878098 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj2n4\" (UniqueName: \"kubernetes.io/projected/42713741-8e02-44d5-b649-adf7d0f80837-kube-api-access-zj2n4\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.899027 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42713741-8e02-44d5-b649-adf7d0f80837" (UID: "42713741-8e02-44d5-b649-adf7d0f80837"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.907122 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.921058 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42713741-8e02-44d5-b649-adf7d0f80837" (UID: "42713741-8e02-44d5-b649-adf7d0f80837"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.945995 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42713741-8e02-44d5-b649-adf7d0f80837" (UID: "42713741-8e02-44d5-b649-adf7d0f80837"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.963151 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985005 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-config-data\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985499 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-run-httpd\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985569 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp88n\" (UniqueName: \"kubernetes.io/projected/ff866f70-2023-4750-b047-5c39e8fa5072-kube-api-access-lp88n\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985623 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-combined-ca-bundle\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985650 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-sg-core-conf-yaml\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985756 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-scripts\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.985783 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-log-httpd\") pod \"ff866f70-2023-4750-b047-5c39e8fa5072\" (UID: \"ff866f70-2023-4750-b047-5c39e8fa5072\") " Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.986550 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.986576 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.986592 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42713741-8e02-44d5-b649-adf7d0f80837-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.987295 4684 
Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.994561 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-scripts" (OuterVolumeSpecName: "scripts") pod "ff866f70-2023-4750-b047-5c39e8fa5072" (UID: "ff866f70-2023-4750-b047-5c39e8fa5072"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.994984 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff866f70-2023-4750-b047-5c39e8fa5072" (UID: "ff866f70-2023-4750-b047-5c39e8fa5072"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:33:59 crc kubenswrapper[4684]: I0123 09:33:59.996420 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff866f70-2023-4750-b047-5c39e8fa5072-kube-api-access-lp88n" (OuterVolumeSpecName: "kube-api-access-lp88n") pod "ff866f70-2023-4750-b047-5c39e8fa5072" (UID: "ff866f70-2023-4750-b047-5c39e8fa5072"). InnerVolumeSpecName "kube-api-access-lp88n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.035959 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff866f70-2023-4750-b047-5c39e8fa5072" (UID: "ff866f70-2023-4750-b047-5c39e8fa5072"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.088759 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.088800 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp88n\" (UniqueName: \"kubernetes.io/projected/ff866f70-2023-4750-b047-5c39e8fa5072-kube-api-access-lp88n\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.088815 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.088828 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.088841 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff866f70-2023-4750-b047-5c39e8fa5072-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.093841 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff866f70-2023-4750-b047-5c39e8fa5072" (UID: "ff866f70-2023-4750-b047-5c39e8fa5072"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.126458 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-config-data" (OuterVolumeSpecName: "config-data") pod "ff866f70-2023-4750-b047-5c39e8fa5072" (UID: "ff866f70-2023-4750-b047-5c39e8fa5072"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.190375 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.190414 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff866f70-2023-4750-b047-5c39e8fa5072-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.275310 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx" event={"ID":"42713741-8e02-44d5-b649-adf7d0f80837","Type":"ContainerDied","Data":"2e7ccb4080e22c294b9be34b08cb815f8d549638167648b57131279b78c94662"}
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.275365 4684 scope.go:117] "RemoveContainer" containerID="70ea31ef317297c4f56b11a83ed391191a955809c198b6bfd8857276491501e7"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.275518 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b9fcb755f-mbwwx"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.288021 4684 generic.go:334] "Generic (PLEG): container finished" podID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerID="1a2b33daa1e6376e7f9f6d27bb2cb847fa4f7588f6efef55c73a3efb3e2cc162" exitCode=0
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.288089 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerDied","Data":"1a2b33daa1e6376e7f9f6d27bb2cb847fa4f7588f6efef55c73a3efb3e2cc162"}
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.305119 4684 scope.go:117] "RemoveContainer" containerID="c6cc27d87fa2d6fc77d881fd0e138e9dba9a6efab6e3dd65d56bd28cefb0b855"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.317034 4684 generic.go:334] "Generic (PLEG): container finished" podID="ff866f70-2023-4750-b047-5c39e8fa5072" containerID="5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc" exitCode=0
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.317104 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerDied","Data":"5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc"}
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.317194 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff866f70-2023-4750-b047-5c39e8fa5072","Type":"ContainerDied","Data":"3561860d85cc7013bfa7d1d8d47577cfa2ed9742b13211c845e71f45e728c725"}
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.317256 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.356170 4684 scope.go:117] "RemoveContainer" containerID="71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.363984 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b9fcb755f-mbwwx"]
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.424765 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b9fcb755f-mbwwx"]
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.429823 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.440846 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.456781 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.457164 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="sg-core"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457183 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="sg-core"
Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.457196 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-central-agent"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457202 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-central-agent"
Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.457217 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42713741-8e02-44d5-b649-adf7d0f80837" containerName="dnsmasq-dns"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457224 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="42713741-8e02-44d5-b649-adf7d0f80837" containerName="dnsmasq-dns"
Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.457235 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-notification-agent"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457241 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-notification-agent"
Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.457252 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="proxy-httpd"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457258 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="proxy-httpd"
Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.457273 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42713741-8e02-44d5-b649-adf7d0f80837" containerName="init"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457279 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="42713741-8e02-44d5-b649-adf7d0f80837" containerName="init"
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457451 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="proxy-httpd"
containerName="proxy-httpd" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457501 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-central-agent" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457516 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="sg-core" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457531 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" containerName="ceilometer-notification-agent" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.457548 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="42713741-8e02-44d5-b649-adf7d0f80837" containerName="dnsmasq-dns" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.460325 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.468847 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.468904 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.477805 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.502357 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-scripts\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.502607 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.502686 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558zm\" (UniqueName: \"kubernetes.io/projected/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-kube-api-access-558zm\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.502925 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.503048 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.503160 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-config-data\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.503320 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.514058 4684 scope.go:117] "RemoveContainer" containerID="1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.554531 4684 scope.go:117] "RemoveContainer" containerID="0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.589180 4684 scope.go:117] "RemoveContainer" containerID="5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.605675 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.606100 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-config-data\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.606315 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.606591 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-scripts\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.606851 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.607769 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.608136 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558zm\" (UniqueName: \"kubernetes.io/projected/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-kube-api-access-558zm\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " 
pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.608464 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.608865 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.613613 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.614374 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-config-data\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.616719 4684 scope.go:117] "RemoveContainer" containerID="71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.617181 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-scripts\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.626578 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885\": container with ID starting with 71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885 not found: ID does not exist" containerID="71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.626862 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885"} err="failed to get container status \"71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885\": rpc error: code = NotFound desc = could not find container \"71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885\": container with ID starting with 71a9e414aa8ddeace21e533466b0b8cd62e111032795895df0ae71a50f562885 not found: ID does not exist" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.627014 4684 scope.go:117] "RemoveContainer" containerID="1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.628250 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " pod="openstack/ceilometer-0" Jan 23 09:34:00 crc kubenswrapper[4684]: 
E0123 09:34:00.631426 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb\": container with ID starting with 1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb not found: ID does not exist" containerID="1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.631605 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb"} err="failed to get container status \"1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb\": rpc error: code = NotFound desc = could not find container \"1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb\": container with ID starting with 1a9baaa98c7f04388bb26d32b730745ea34f9516610cd22752dd29b9d64574bb not found: ID does not exist" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.631760 4684 scope.go:117] "RemoveContainer" containerID="0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995" Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.632403 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995\": container with ID starting with 0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995 not found: ID does not exist" containerID="0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.632464 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995"} err="failed to get container status \"0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995\": rpc error: code = NotFound desc = could not find container \"0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995\": container with ID starting with 0262d242c9828252ff8794073bc3e4f952bacd47fb8b68f952a758b082570995 not found: ID does not exist" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.632503 4684 scope.go:117] "RemoveContainer" containerID="5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc" Jan 23 09:34:00 crc kubenswrapper[4684]: E0123 09:34:00.634964 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc\": container with ID starting with 5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc not found: ID does not exist" containerID="5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.635119 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc"} err="failed to get container status \"5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc\": rpc error: code = NotFound desc = could not find container \"5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc\": container with ID starting with 5a3fb216c3fd10474ccf20df12d48ed6133bb8d54a3f088c191f3d506a04a9fc not found: ID does not exist" Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.642236 
Jan 23 09:34:00 crc kubenswrapper[4684]: I0123 09:34:00.799471 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 09:34:01 crc kubenswrapper[4684]: I0123 09:34:01.330521 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" event={"ID":"71a684b6-60c9-4017-91d1-7a8e340d8482","Type":"ContainerStarted","Data":"24c551e4a261aabe66b4fb2f4e85fa350c54b90b1867df15c3a26439f7433cc5"}
Jan 23 09:34:01 crc kubenswrapper[4684]: I0123 09:34:01.334511 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerStarted","Data":"ef2f5d0c0f10da631e3fcc11f9292002f086c4ce3e9ba99fe862755fde46db39"}
Jan 23 09:34:01 crc kubenswrapper[4684]: I0123 09:34:01.338395 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:34:01 crc kubenswrapper[4684]: I0123 09:34:01.368942 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" podStartSLOduration=2.520465911 podStartE2EDuration="33.368917603s" podCreationTimestamp="2026-01-23 09:33:28 +0000 UTC" firstStartedPulling="2026-01-23 09:33:29.340878527 +0000 UTC m=+1581.964257078" lastFinishedPulling="2026-01-23 09:34:00.189330229 +0000 UTC m=+1612.812708770" observedRunningTime="2026-01-23 09:34:01.358995656 +0000 UTC m=+1613.982374207" watchObservedRunningTime="2026-01-23 09:34:01.368917603 +0000 UTC m=+1613.992296144"
Jan 23 09:34:01 crc kubenswrapper[4684]: I0123 09:34:01.607938 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42713741-8e02-44d5-b649-adf7d0f80837" path="/var/lib/kubelet/pods/42713741-8e02-44d5-b649-adf7d0f80837/volumes"
Jan 23 09:34:01 crc kubenswrapper[4684]: I0123 09:34:01.608947 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff866f70-2023-4750-b047-5c39e8fa5072" path="/var/lib/kubelet/pods/ff866f70-2023-4750-b047-5c39e8fa5072/volumes"
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.166045 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.349726 4684 generic.go:334] "Generic (PLEG): container finished" podID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerID="47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f" exitCode=137
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.349815 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8693d7b-d2eb-4be6-95f7-299baceab47f","Type":"ContainerDied","Data":"47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f"}
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.349851 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8693d7b-d2eb-4be6-95f7-299baceab47f","Type":"ContainerDied","Data":"be92492981c11afa4f1989c797889385cfae71d02081edea17a3971f71e46e93"}
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.349872 4684 scope.go:117] "RemoveContainer" containerID="47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f"
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.350049 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.357768 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerStarted","Data":"245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f"}
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.357815 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerStarted","Data":"62c089412973a63f7b2c59996a170292558285e5a8d3f21f5c026e7f37d8ce7f"}
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.360323 4684 generic.go:334] "Generic (PLEG): container finished" podID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerID="ef2f5d0c0f10da631e3fcc11f9292002f086c4ce3e9ba99fe862755fde46db39" exitCode=0
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.360367 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerDied","Data":"ef2f5d0c0f10da631e3fcc11f9292002f086c4ce3e9ba99fe862755fde46db39"}
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.362941 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8693d7b-d2eb-4be6-95f7-299baceab47f-etc-machine-id\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.363028 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.363170 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data-custom\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.363233 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8693d7b-d2eb-4be6-95f7-299baceab47f-logs\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.363296 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-combined-ca-bundle\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.363338 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-scripts\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.363365 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbng\" (UniqueName: \"kubernetes.io/projected/b8693d7b-d2eb-4be6-95f7-299baceab47f-kube-api-access-tvbng\") pod \"b8693d7b-d2eb-4be6-95f7-299baceab47f\" (UID: \"b8693d7b-d2eb-4be6-95f7-299baceab47f\") "
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.370866 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8693d7b-d2eb-4be6-95f7-299baceab47f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.378660 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8693d7b-d2eb-4be6-95f7-299baceab47f-logs" (OuterVolumeSpecName: "logs") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.381687 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.381929 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8693d7b-d2eb-4be6-95f7-299baceab47f-kube-api-access-tvbng" (OuterVolumeSpecName: "kube-api-access-tvbng") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "kube-api-access-tvbng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.417128 4684 scope.go:117] "RemoveContainer" containerID="ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a"
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.417396 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-scripts" (OuterVolumeSpecName: "scripts") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.465877 4684 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.465945 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8693d7b-d2eb-4be6-95f7-299baceab47f-logs\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.465958 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.465967 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbng\" (UniqueName: \"kubernetes.io/projected/b8693d7b-d2eb-4be6-95f7-299baceab47f-kube-api-access-tvbng\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.465980 4684 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8693d7b-d2eb-4be6-95f7-299baceab47f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.481938 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.551348 4684 scope.go:117] "RemoveContainer" containerID="47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f" Jan 23 09:34:02 crc kubenswrapper[4684]: E0123 09:34:02.552010 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f\": container with ID starting with 47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f not found: ID does not exist" containerID="47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.552039 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f"} err="failed to get container status \"47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f\": rpc error: code = NotFound desc = could not find container \"47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f\": container with ID starting with 47e323bcaa588d63d0fdb7611b165e3a0850544772b70debb8a042e53f925a9f not found: ID does not exist" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.552062 4684 scope.go:117] "RemoveContainer" containerID="ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a" Jan 23 09:34:02 crc kubenswrapper[4684]: E0123 09:34:02.552312 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a\": container with ID starting with ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a not found: ID does not exist" containerID="ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.552337 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a"} err="failed to get container status \"ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a\": rpc error: code = NotFound desc = could not find container \"ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a\": container with ID starting with ef8bf7c6fb6e70d7f574af2f1f5a5ee04b2e89507f6838964eee88bd73ddc71a not found: ID does not exist" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.559904 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data" (OuterVolumeSpecName: "config-data") pod "b8693d7b-d2eb-4be6-95f7-299baceab47f" (UID: "b8693d7b-d2eb-4be6-95f7-299baceab47f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.567988 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.568023 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8693d7b-d2eb-4be6-95f7-299baceab47f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.703785 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.711348 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.722718 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:34:02 crc kubenswrapper[4684]: E0123 09:34:02.723234 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api-log" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.723262 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api-log" Jan 23 09:34:02 crc kubenswrapper[4684]: E0123 09:34:02.723290 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.723299 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.723507 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api-log" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.723540 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.724521 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.726913 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.728993 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.729165 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.751303 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.872386 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-logs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.872668 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.872893 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-config-data-custom\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.872976 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-scripts\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.873119 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlrz\" (UniqueName: \"kubernetes.io/projected/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-kube-api-access-vdlrz\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.873256 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.873386 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-config-data\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.873539 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.873631 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975598 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-config-data\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975743 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975770 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975808 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-logs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975895 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-config-data-custom\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.975932 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-scripts\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.976005 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlrz\" (UniqueName: \"kubernetes.io/projected/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-kube-api-access-vdlrz\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.976037 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.976980 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.977365 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-logs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.982295 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-scripts\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.983295 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.983483 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-config-data-custom\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.983978 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.985640 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:02 crc kubenswrapper[4684]: I0123 09:34:02.985555 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-config-data\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:03 crc kubenswrapper[4684]: I0123 09:34:03.014287 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlrz\" (UniqueName: \"kubernetes.io/projected/6fc125a2-7cc0-40a7-bb2c-acc93ba7866a-kube-api-access-vdlrz\") pod \"cinder-api-0\" (UID: \"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a\") " pod="openstack/cinder-api-0" Jan 23 09:34:03 crc kubenswrapper[4684]: I0123 09:34:03.080880 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:34:03 crc kubenswrapper[4684]: I0123 09:34:03.387607 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerStarted","Data":"f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a"} Jan 23 09:34:03 crc kubenswrapper[4684]: I0123 09:34:03.617944 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" path="/var/lib/kubelet/pods/b8693d7b-d2eb-4be6-95f7-299baceab47f/volumes" Jan 23 09:34:03 crc kubenswrapper[4684]: I0123 09:34:03.780564 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:34:04 crc kubenswrapper[4684]: I0123 09:34:04.416266 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a","Type":"ContainerStarted","Data":"5e1cf0498c3e01133d34eae4d04a308c47f3431aacb8597d3feae2845d80f100"} Jan 23 09:34:04 crc kubenswrapper[4684]: I0123 09:34:04.418041 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerStarted","Data":"bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77"} Jan 23 09:34:04 crc kubenswrapper[4684]: I0123 09:34:04.420408 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerStarted","Data":"cadb0dd959801f12a53a546880b91555df5480689263cf0c1a39c7ad7fe27616"} Jan 23 09:34:04 crc kubenswrapper[4684]: I0123 09:34:04.445414 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-scs5f" podStartSLOduration=4.019877161 podStartE2EDuration="7.445396212s" podCreationTimestamp="2026-01-23 09:33:57 +0000 UTC" firstStartedPulling="2026-01-23 09:34:00.305280951 +0000 UTC m=+1612.928659492" lastFinishedPulling="2026-01-23 09:34:03.730800002 +0000 UTC m=+1616.354178543" observedRunningTime="2026-01-23 09:34:04.443120847 +0000 UTC m=+1617.066499408" watchObservedRunningTime="2026-01-23 09:34:04.445396212 +0000 UTC m=+1617.068774763" Jan 23 09:34:05 crc kubenswrapper[4684]: I0123 09:34:05.818762 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a","Type":"ContainerStarted","Data":"2c906196e8da64e7436c6eb7a82666781b477d3c9b90c971a31fa84abf119f51"} Jan 23 09:34:06 crc kubenswrapper[4684]: I0123 09:34:06.834329 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6fc125a2-7cc0-40a7-bb2c-acc93ba7866a","Type":"ContainerStarted","Data":"de2037d584a7869cf941b140fd7dc54f225dd89a8eab7635839b1215b4691326"} Jan 23 09:34:06 crc kubenswrapper[4684]: I0123 09:34:06.834803 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 09:34:06 crc kubenswrapper[4684]: I0123 09:34:06.878260 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.878236063 podStartE2EDuration="4.878236063s" podCreationTimestamp="2026-01-23 09:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:06.859788689 +0000 UTC m=+1619.483167230" 
Jan 23 09:34:07 crc kubenswrapper[4684]: I0123 09:34:07.095132 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b8693d7b-d2eb-4be6-95f7-299baceab47f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.159:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:34:07 crc kubenswrapper[4684]: I0123 09:34:07.864847 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerStarted","Data":"5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607"}
Jan 23 09:34:08 crc kubenswrapper[4684]: I0123 09:34:08.321002 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-scs5f"
Jan 23 09:34:08 crc kubenswrapper[4684]: I0123 09:34:08.321324 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-scs5f"
Jan 23 09:34:08 crc kubenswrapper[4684]: I0123 09:34:08.387676 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-scs5f"
Jan 23 09:34:08 crc kubenswrapper[4684]: I0123 09:34:08.905378 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.5579165120000003 podStartE2EDuration="8.905355021s" podCreationTimestamp="2026-01-23 09:34:00 +0000 UTC" firstStartedPulling="2026-01-23 09:34:01.342316974 +0000 UTC m=+1613.965695515" lastFinishedPulling="2026-01-23 09:34:06.689755483 +0000 UTC m=+1619.313134024" observedRunningTime="2026-01-23 09:34:08.896488385 +0000 UTC m=+1621.519866946" watchObservedRunningTime="2026-01-23 09:34:08.905355021 +0000 UTC m=+1621.528733562"
Jan 23 09:34:15 crc kubenswrapper[4684]: I0123 09:34:15.557248 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 23 09:34:18 crc kubenswrapper[4684]: I0123 09:34:18.375055 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-scs5f"
Jan 23 09:34:18 crc kubenswrapper[4684]: I0123 09:34:18.439767 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-scs5f"]
Jan 23 09:34:18 crc kubenswrapper[4684]: I0123 09:34:18.976707 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-scs5f" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="registry-server" containerID="cri-o://cadb0dd959801f12a53a546880b91555df5480689263cf0c1a39c7ad7fe27616" gracePeriod=2
Jan 23 09:34:19 crc kubenswrapper[4684]: I0123 09:34:19.011231 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75f58545cb-xtfdc"
Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.024643 4684 generic.go:334] "Generic (PLEG): container finished" podID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerID="cadb0dd959801f12a53a546880b91555df5480689263cf0c1a39c7ad7fe27616" exitCode=0
Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.024954 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerDied","Data":"cadb0dd959801f12a53a546880b91555df5480689263cf0c1a39c7ad7fe27616"}
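gracePeriod=2 in the kill above (and gracePeriod=30 for the neutron containers further below) is the window the container gets between SIGTERM and SIGKILL; the exitCode=0 that follows shows registry-server exited inside it. A minimal sketch of that contract against a local process rather than the CRI; the helper and the sleep stand-in are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // killWithGracePeriod mirrors the TERM-then-KILL contract from the log:
    // send SIGTERM, give the process gracePeriod to exit, then SIGKILL.
    func killWithGracePeriod(cmd *exec.Cmd, gracePeriod time.Duration) {
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case err := <-done:
    		fmt.Println("exited within grace period:", err)
    	case <-time.After(gracePeriod):
    		_ = cmd.Process.Kill() // SIGKILL once the grace period elapses
    		<-done
    		fmt.Println("killed after grace period")
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60") // stand-in for a container process
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	killWithGracePeriod(cmd, 2*time.Second) // gracePeriod=2, as above
    }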
event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerDied","Data":"cadb0dd959801f12a53a546880b91555df5480689263cf0c1a39c7ad7fe27616"} Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.496070 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.552798 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-utilities\") pod \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.553026 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-catalog-content\") pod \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.553065 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx599\" (UniqueName: \"kubernetes.io/projected/16af4cc6-6815-4216-a5af-3d7ba5720cf3-kube-api-access-gx599\") pod \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\" (UID: \"16af4cc6-6815-4216-a5af-3d7ba5720cf3\") " Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.553837 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-utilities" (OuterVolumeSpecName: "utilities") pod "16af4cc6-6815-4216-a5af-3d7ba5720cf3" (UID: "16af4cc6-6815-4216-a5af-3d7ba5720cf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.565808 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16af4cc6-6815-4216-a5af-3d7ba5720cf3-kube-api-access-gx599" (OuterVolumeSpecName: "kube-api-access-gx599") pod "16af4cc6-6815-4216-a5af-3d7ba5720cf3" (UID: "16af4cc6-6815-4216-a5af-3d7ba5720cf3"). InnerVolumeSpecName "kube-api-access-gx599". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.605517 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16af4cc6-6815-4216-a5af-3d7ba5720cf3" (UID: "16af4cc6-6815-4216-a5af-3d7ba5720cf3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.655367 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.655811 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx599\" (UniqueName: \"kubernetes.io/projected/16af4cc6-6815-4216-a5af-3d7ba5720cf3-kube-api-access-gx599\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:20 crc kubenswrapper[4684]: I0123 09:34:20.655831 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16af4cc6-6815-4216-a5af-3d7ba5720cf3-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.036084 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scs5f" event={"ID":"16af4cc6-6815-4216-a5af-3d7ba5720cf3","Type":"ContainerDied","Data":"fdb0fa1522e3663c2d67662752e51e39c00c9bd531351c468bb087802d5912dc"} Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.036384 4684 scope.go:117] "RemoveContainer" containerID="cadb0dd959801f12a53a546880b91555df5480689263cf0c1a39c7ad7fe27616" Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.036514 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scs5f" Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.066262 4684 scope.go:117] "RemoveContainer" containerID="ef2f5d0c0f10da631e3fcc11f9292002f086c4ce3e9ba99fe862755fde46db39" Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.080646 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-scs5f"] Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.090520 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-scs5f"] Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.101577 4684 scope.go:117] "RemoveContainer" containerID="1a2b33daa1e6376e7f9f6d27bb2cb847fa4f7588f6efef55c73a3efb3e2cc162" Jan 23 09:34:21 crc kubenswrapper[4684]: I0123 09:34:21.598616 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" path="/var/lib/kubelet/pods/16af4cc6-6815-4216-a5af-3d7ba5720cf3/volumes" Jan 23 09:34:22 crc kubenswrapper[4684]: I0123 09:34:22.384425 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f5484d975-q9jz7" Jan 23 09:34:22 crc kubenswrapper[4684]: I0123 09:34:22.455589 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75f58545cb-xtfdc"] Jan 23 09:34:22 crc kubenswrapper[4684]: I0123 09:34:22.456413 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75f58545cb-xtfdc" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-httpd" containerID="cri-o://5a19d5e5234809e1f60691730398cea68dddfa18bfbd96febaa55b2782b5283b" gracePeriod=30 Jan 23 09:34:22 crc kubenswrapper[4684]: I0123 09:34:22.456334 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75f58545cb-xtfdc" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-api" containerID="cri-o://aa2e0795d891f05c3c3740731d371598dd147a2fa84e2cbb486a96d5e7067258" 
gracePeriod=30 Jan 23 09:34:23 crc kubenswrapper[4684]: I0123 09:34:23.056593 4684 generic.go:334] "Generic (PLEG): container finished" podID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerID="5a19d5e5234809e1f60691730398cea68dddfa18bfbd96febaa55b2782b5283b" exitCode=0 Jan 23 09:34:23 crc kubenswrapper[4684]: I0123 09:34:23.056661 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f58545cb-xtfdc" event={"ID":"0ffa2554-38b9-498e-a08f-465b4454ed2d","Type":"ContainerDied","Data":"5a19d5e5234809e1f60691730398cea68dddfa18bfbd96febaa55b2782b5283b"} Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.339446 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.340521 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-central-agent" containerID="cri-o://245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f" gracePeriod=30 Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.341408 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="proxy-httpd" containerID="cri-o://5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607" gracePeriod=30 Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.341463 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="sg-core" containerID="cri-o://bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77" gracePeriod=30 Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.341523 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-notification-agent" containerID="cri-o://f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a" gracePeriod=30 Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.341808 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:34:24 crc kubenswrapper[4684]: I0123 09:34:24.354052 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": EOF" Jan 23 09:34:24 crc kubenswrapper[4684]: E0123 09:34:24.465374 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa89d93_7eb6_41aa_bf62_eecd44d2c82e.slice/crio-conmon-bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:34:25 crc kubenswrapper[4684]: I0123 09:34:25.074618 4684 generic.go:334] "Generic (PLEG): container finished" podID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerID="5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607" exitCode=0 Jan 23 09:34:25 crc kubenswrapper[4684]: I0123 09:34:25.074667 4684 generic.go:334] "Generic (PLEG): container finished" podID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerID="bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77" exitCode=2 Jan 23 09:34:25 crc kubenswrapper[4684]: 
I0123 09:34:25.074663 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerDied","Data":"5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607"} Jan 23 09:34:25 crc kubenswrapper[4684]: I0123 09:34:25.074785 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerDied","Data":"bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77"} Jan 23 09:34:26 crc kubenswrapper[4684]: I0123 09:34:26.087225 4684 generic.go:334] "Generic (PLEG): container finished" podID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerID="245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f" exitCode=0 Jan 23 09:34:26 crc kubenswrapper[4684]: I0123 09:34:26.087327 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerDied","Data":"245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f"} Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.105680 4684 generic.go:334] "Generic (PLEG): container finished" podID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerID="aa2e0795d891f05c3c3740731d371598dd147a2fa84e2cbb486a96d5e7067258" exitCode=0 Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.105905 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f58545cb-xtfdc" event={"ID":"0ffa2554-38b9-498e-a08f-465b4454ed2d","Type":"ContainerDied","Data":"aa2e0795d891f05c3c3740731d371598dd147a2fa84e2cbb486a96d5e7067258"} Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.106285 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f58545cb-xtfdc" event={"ID":"0ffa2554-38b9-498e-a08f-465b4454ed2d","Type":"ContainerDied","Data":"1c19a336b4b20f92f292aed6caba173ffa571f2278c4c532144cc8c661034b3b"} Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.106305 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c19a336b4b20f92f292aed6caba173ffa571f2278c4c532144cc8c661034b3b" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.171367 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.281772 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-config\") pod \"0ffa2554-38b9-498e-a08f-465b4454ed2d\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.281829 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7sb\" (UniqueName: \"kubernetes.io/projected/0ffa2554-38b9-498e-a08f-465b4454ed2d-kube-api-access-vw7sb\") pod \"0ffa2554-38b9-498e-a08f-465b4454ed2d\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.282027 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-combined-ca-bundle\") pod \"0ffa2554-38b9-498e-a08f-465b4454ed2d\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.282056 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-ovndb-tls-certs\") pod \"0ffa2554-38b9-498e-a08f-465b4454ed2d\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.282738 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-httpd-config\") pod \"0ffa2554-38b9-498e-a08f-465b4454ed2d\" (UID: \"0ffa2554-38b9-498e-a08f-465b4454ed2d\") " Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.287657 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0ffa2554-38b9-498e-a08f-465b4454ed2d" (UID: "0ffa2554-38b9-498e-a08f-465b4454ed2d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.289381 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffa2554-38b9-498e-a08f-465b4454ed2d-kube-api-access-vw7sb" (OuterVolumeSpecName: "kube-api-access-vw7sb") pod "0ffa2554-38b9-498e-a08f-465b4454ed2d" (UID: "0ffa2554-38b9-498e-a08f-465b4454ed2d"). InnerVolumeSpecName "kube-api-access-vw7sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.333520 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ffa2554-38b9-498e-a08f-465b4454ed2d" (UID: "0ffa2554-38b9-498e-a08f-465b4454ed2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.335807 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-config" (OuterVolumeSpecName: "config") pod "0ffa2554-38b9-498e-a08f-465b4454ed2d" (UID: "0ffa2554-38b9-498e-a08f-465b4454ed2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.370112 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0ffa2554-38b9-498e-a08f-465b4454ed2d" (UID: "0ffa2554-38b9-498e-a08f-465b4454ed2d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.385448 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.385748 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw7sb\" (UniqueName: \"kubernetes.io/projected/0ffa2554-38b9-498e-a08f-465b4454ed2d-kube-api-access-vw7sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.385842 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.385911 4684 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.386000 4684 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ffa2554-38b9-498e-a08f-465b4454ed2d-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:28 crc kubenswrapper[4684]: I0123 09:34:28.911256 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.107795 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558zm\" (UniqueName: \"kubernetes.io/projected/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-kube-api-access-558zm\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108203 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-sg-core-conf-yaml\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108296 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-log-httpd\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108335 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-scripts\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108360 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-combined-ca-bundle\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108430 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-config-data\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108457 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-run-httpd\") pod \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\" (UID: \"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e\") " Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.108795 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.109120 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.109184 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.126659 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-kube-api-access-558zm" (OuterVolumeSpecName: "kube-api-access-558zm") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "kube-api-access-558zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.136033 4684 generic.go:334] "Generic (PLEG): container finished" podID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerID="f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a" exitCode=0 Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.136151 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75f58545cb-xtfdc" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.136345 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerDied","Data":"f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a"} Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.136510 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa89d93-7eb6-41aa-bf62-eecd44d2c82e","Type":"ContainerDied","Data":"62c089412973a63f7b2c59996a170292558285e5a8d3f21f5c026e7f37d8ce7f"} Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.136601 4684 scope.go:117] "RemoveContainer" containerID="5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.136995 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.140158 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-scripts" (OuterVolumeSpecName: "scripts") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.148979 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.204091 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.211185 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558zm\" (UniqueName: \"kubernetes.io/projected/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-kube-api-access-558zm\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.211220 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.211232 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.211244 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.211255 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.224194 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-config-data" (OuterVolumeSpecName: "config-data") pod "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" (UID: "4fa89d93-7eb6-41aa-bf62-eecd44d2c82e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.255520 4684 scope.go:117] "RemoveContainer" containerID="bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.267835 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75f58545cb-xtfdc"] Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.276840 4684 scope.go:117] "RemoveContainer" containerID="f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.277591 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75f58545cb-xtfdc"] Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.297308 4684 scope.go:117] "RemoveContainer" containerID="245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.312553 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.315390 4684 scope.go:117] "RemoveContainer" containerID="5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.316390 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607\": container with ID starting with 5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607 not found: ID does not exist" 
containerID="5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.316431 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607"} err="failed to get container status \"5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607\": rpc error: code = NotFound desc = could not find container \"5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607\": container with ID starting with 5ce4bfe8329de7f03c0fe3b1a3e2e75645201a862f7c26dc01f176f5de769607 not found: ID does not exist" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.316462 4684 scope.go:117] "RemoveContainer" containerID="bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.316824 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77\": container with ID starting with bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77 not found: ID does not exist" containerID="bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.316861 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77"} err="failed to get container status \"bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77\": rpc error: code = NotFound desc = could not find container \"bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77\": container with ID starting with bc7cc89ad1529fc55587709fe13590df67d0b0141d31a594cbfea0cb7e7fec77 not found: ID does not exist" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.316876 4684 scope.go:117] "RemoveContainer" containerID="f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.317101 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a\": container with ID starting with f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a not found: ID does not exist" containerID="f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.317134 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a"} err="failed to get container status \"f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a\": rpc error: code = NotFound desc = could not find container \"f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a\": container with ID starting with f5bad19e2f49439461b85e4557cab54f0a9214796dd9b4019f50104463db887a not found: ID does not exist" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.317152 4684 scope.go:117] "RemoveContainer" containerID="245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.317431 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f\": container with ID starting with 245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f not found: ID does not exist" containerID="245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.317460 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f"} err="failed to get container status \"245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f\": rpc error: code = NotFound desc = could not find container \"245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f\": container with ID starting with 245d274e9ff159a926450f928c7bed37eed9da3f0713fc2ef0cd5dfb4635ab6f not found: ID does not exist" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.472871 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.481930 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.500860 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501214 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="sg-core" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501235 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="sg-core" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501251 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-notification-agent" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501260 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-notification-agent" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501277 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-api" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501284 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-api" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501295 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-central-agent" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501303 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-central-agent" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501319 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="registry-server" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501326 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="registry-server" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501613 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="extract-utilities" Jan 23 09:34:29 crc 
kubenswrapper[4684]: I0123 09:34:29.501626 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="extract-utilities" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501649 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="extract-content" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501657 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="extract-content" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501670 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="proxy-httpd" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501680 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="proxy-httpd" Jan 23 09:34:29 crc kubenswrapper[4684]: E0123 09:34:29.501691 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-httpd" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501720 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-httpd" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501943 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-api" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501976 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="16af4cc6-6815-4216-a5af-3d7ba5720cf3" containerName="registry-server" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.501993 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" containerName="neutron-httpd" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.502006 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-notification-agent" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.502018 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="sg-core" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.502036 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="ceilometer-central-agent" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.502048 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" containerName="proxy-httpd" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.504126 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.510265 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.510493 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.513536 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.594250 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffa2554-38b9-498e-a08f-465b4454ed2d" path="/var/lib/kubelet/pods/0ffa2554-38b9-498e-a08f-465b4454ed2d/volumes" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.595077 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa89d93-7eb6-41aa-bf62-eecd44d2c82e" path="/var/lib/kubelet/pods/4fa89d93-7eb6-41aa-bf62-eecd44d2c82e/volumes" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.616827 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-scripts\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.616900 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.617012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.617040 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-config-data\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.617156 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-log-httpd\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.617267 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-run-httpd\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.617312 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmdks\" (UniqueName: \"kubernetes.io/projected/af35288f-b2d9-4281-a7c7-2fbc7d21596f-kube-api-access-rmdks\") pod 
\"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719249 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmdks\" (UniqueName: \"kubernetes.io/projected/af35288f-b2d9-4281-a7c7-2fbc7d21596f-kube-api-access-rmdks\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719440 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-scripts\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719523 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719575 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719605 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-config-data\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719631 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-log-httpd\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.719780 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-run-httpd\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.720245 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-run-httpd\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.722410 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-log-httpd\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.725622 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.726419 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-scripts\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.726682 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.729822 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-config-data\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.738344 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmdks\" (UniqueName: \"kubernetes.io/projected/af35288f-b2d9-4281-a7c7-2fbc7d21596f-kube-api-access-rmdks\") pod \"ceilometer-0\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " pod="openstack/ceilometer-0" Jan 23 09:34:29 crc kubenswrapper[4684]: I0123 09:34:29.835731 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:34:30 crc kubenswrapper[4684]: I0123 09:34:30.146265 4684 generic.go:334] "Generic (PLEG): container finished" podID="71a684b6-60c9-4017-91d1-7a8e340d8482" containerID="24c551e4a261aabe66b4fb2f4e85fa350c54b90b1867df15c3a26439f7433cc5" exitCode=0 Jan 23 09:34:30 crc kubenswrapper[4684]: I0123 09:34:30.146372 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" event={"ID":"71a684b6-60c9-4017-91d1-7a8e340d8482","Type":"ContainerDied","Data":"24c551e4a261aabe66b4fb2f4e85fa350c54b90b1867df15c3a26439f7433cc5"} Jan 23 09:34:30 crc kubenswrapper[4684]: I0123 09:34:30.306668 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.159984 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerStarted","Data":"130a4497d5d83d14210764ce6ccaedb36dde9c3686a5be3a26020ed76249fdd4"} Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.160292 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerStarted","Data":"0d896e95c752003f5b5e0574e4e4eb577072fc289003fb1c36465497f82f7ba3"} Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.501225 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.672904 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-config-data\") pod \"71a684b6-60c9-4017-91d1-7a8e340d8482\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.673445 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-combined-ca-bundle\") pod \"71a684b6-60c9-4017-91d1-7a8e340d8482\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.673667 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-scripts\") pod \"71a684b6-60c9-4017-91d1-7a8e340d8482\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.673794 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jggsc\" (UniqueName: \"kubernetes.io/projected/71a684b6-60c9-4017-91d1-7a8e340d8482-kube-api-access-jggsc\") pod \"71a684b6-60c9-4017-91d1-7a8e340d8482\" (UID: \"71a684b6-60c9-4017-91d1-7a8e340d8482\") " Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.678857 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a684b6-60c9-4017-91d1-7a8e340d8482-kube-api-access-jggsc" (OuterVolumeSpecName: "kube-api-access-jggsc") pod "71a684b6-60c9-4017-91d1-7a8e340d8482" (UID: "71a684b6-60c9-4017-91d1-7a8e340d8482"). InnerVolumeSpecName "kube-api-access-jggsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.682719 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-scripts" (OuterVolumeSpecName: "scripts") pod "71a684b6-60c9-4017-91d1-7a8e340d8482" (UID: "71a684b6-60c9-4017-91d1-7a8e340d8482"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.701997 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71a684b6-60c9-4017-91d1-7a8e340d8482" (UID: "71a684b6-60c9-4017-91d1-7a8e340d8482"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.712235 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-config-data" (OuterVolumeSpecName: "config-data") pod "71a684b6-60c9-4017-91d1-7a8e340d8482" (UID: "71a684b6-60c9-4017-91d1-7a8e340d8482"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.776387 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jggsc\" (UniqueName: \"kubernetes.io/projected/71a684b6-60c9-4017-91d1-7a8e340d8482-kube-api-access-jggsc\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.776627 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.776748 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:31 crc kubenswrapper[4684]: I0123 09:34:31.776826 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a684b6-60c9-4017-91d1-7a8e340d8482-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.171143 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerStarted","Data":"21772dd7c3987e334a247af035368b487c7a80039dd4e546f1fa783ed550e56f"} Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.173224 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" event={"ID":"71a684b6-60c9-4017-91d1-7a8e340d8482","Type":"ContainerDied","Data":"cd661dba9f1d5cf7a2365e21e23c842370d7ddb99ac8acd270a81a8c761e777d"} Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.173246 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7pzwl" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.173250 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd661dba9f1d5cf7a2365e21e23c842370d7ddb99ac8acd270a81a8c761e777d" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.291100 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 09:34:32 crc kubenswrapper[4684]: E0123 09:34:32.291528 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a684b6-60c9-4017-91d1-7a8e340d8482" containerName="nova-cell0-conductor-db-sync" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.291546 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a684b6-60c9-4017-91d1-7a8e340d8482" containerName="nova-cell0-conductor-db-sync" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.291799 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a684b6-60c9-4017-91d1-7a8e340d8482" containerName="nova-cell0-conductor-db-sync" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.292620 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.320556 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.325200 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k5ct5" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.325408 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.387598 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f499765b-3360-4bf8-af8c-415602c1c519-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.387686 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws59p\" (UniqueName: \"kubernetes.io/projected/f499765b-3360-4bf8-af8c-415602c1c519-kube-api-access-ws59p\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.387819 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f499765b-3360-4bf8-af8c-415602c1c519-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.489163 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f499765b-3360-4bf8-af8c-415602c1c519-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.489435 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws59p\" (UniqueName: \"kubernetes.io/projected/f499765b-3360-4bf8-af8c-415602c1c519-kube-api-access-ws59p\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.489508 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f499765b-3360-4bf8-af8c-415602c1c519-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.494076 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f499765b-3360-4bf8-af8c-415602c1c519-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.495011 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f499765b-3360-4bf8-af8c-415602c1c519-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.509281 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws59p\" (UniqueName: \"kubernetes.io/projected/f499765b-3360-4bf8-af8c-415602c1c519-kube-api-access-ws59p\") pod \"nova-cell0-conductor-0\" (UID: \"f499765b-3360-4bf8-af8c-415602c1c519\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:32 crc kubenswrapper[4684]: I0123 09:34:32.620129 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:33 crc kubenswrapper[4684]: I0123 09:34:33.116285 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 09:34:33 crc kubenswrapper[4684]: W0123 09:34:33.168522 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf499765b_3360_4bf8_af8c_415602c1c519.slice/crio-c69c8b4eca74e0185b6ab83541f374524dd5cd9b1ecfa6deed13e9e48c4daecb WatchSource:0}: Error finding container c69c8b4eca74e0185b6ab83541f374524dd5cd9b1ecfa6deed13e9e48c4daecb: Status 404 returned error can't find the container with id c69c8b4eca74e0185b6ab83541f374524dd5cd9b1ecfa6deed13e9e48c4daecb Jan 23 09:34:33 crc kubenswrapper[4684]: I0123 09:34:33.201898 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerStarted","Data":"d9d5064a6bd442ab834cc4d983b89821968c0cd7e07b209928a830c29c998672"} Jan 23 09:34:33 crc kubenswrapper[4684]: I0123 09:34:33.203949 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f499765b-3360-4bf8-af8c-415602c1c519","Type":"ContainerStarted","Data":"c69c8b4eca74e0185b6ab83541f374524dd5cd9b1ecfa6deed13e9e48c4daecb"} Jan 23 09:34:34 crc kubenswrapper[4684]: I0123 09:34:34.216509 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f499765b-3360-4bf8-af8c-415602c1c519","Type":"ContainerStarted","Data":"a9d9c2e246f1b380a65da4fd96b909c114f78dc1d3732e6adeaef32b3661b809"} Jan 23 09:34:34 crc kubenswrapper[4684]: I0123 09:34:34.217061 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:34 crc kubenswrapper[4684]: I0123 09:34:34.221188 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerStarted","Data":"4f91368c55733959997086ae0c0025cedf3bb8e0f40669be9100d0ca465f4899"} Jan 23 09:34:34 crc kubenswrapper[4684]: I0123 09:34:34.221392 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:34:34 crc kubenswrapper[4684]: I0123 09:34:34.239322 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.239302164 podStartE2EDuration="2.239302164s" podCreationTimestamp="2026-01-23 09:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:34.235464162 +0000 UTC m=+1646.858842703" watchObservedRunningTime="2026-01-23 09:34:34.239302164 +0000 UTC m=+1646.862680705" Jan 23 09:34:34 crc kubenswrapper[4684]: I0123 09:34:34.270306 4684 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.072551554 podStartE2EDuration="5.270284359s" podCreationTimestamp="2026-01-23 09:34:29 +0000 UTC" firstStartedPulling="2026-01-23 09:34:30.32223204 +0000 UTC m=+1642.945610581" lastFinishedPulling="2026-01-23 09:34:33.519964845 +0000 UTC m=+1646.143343386" observedRunningTime="2026-01-23 09:34:34.26165067 +0000 UTC m=+1646.885029211" watchObservedRunningTime="2026-01-23 09:34:34.270284359 +0000 UTC m=+1646.893662910" Jan 23 09:34:42 crc kubenswrapper[4684]: I0123 09:34:42.650920 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.232820 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-l7dxb"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.234650 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.237180 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.237366 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.254257 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-l7dxb"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.348296 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vsv\" (UniqueName: \"kubernetes.io/projected/f3ca078c-d881-4e98-95bf-7b7486f871d6-kube-api-access-s9vsv\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.348358 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-config-data\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.348438 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-scripts\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.348492 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.387530 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.388836 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.394552 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.422768 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.449790 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-scripts\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.449902 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.449976 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vsv\" (UniqueName: \"kubernetes.io/projected/f3ca078c-d881-4e98-95bf-7b7486f871d6-kube-api-access-s9vsv\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.450015 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-config-data\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.458523 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.461642 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-scripts\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.464510 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-config-data\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.507822 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vsv\" (UniqueName: \"kubernetes.io/projected/f3ca078c-d881-4e98-95bf-7b7486f871d6-kube-api-access-s9vsv\") pod \"nova-cell0-cell-mapping-l7dxb\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.551655 4684 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.560193 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bpw\" (UniqueName: \"kubernetes.io/projected/15dd7b39-32a4-458c-b95a-401064d028df-kube-api-access-z2bpw\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.560448 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-config-data\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.552343 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.569812 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.571613 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.583431 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.657832 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.680534 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.695011 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.703580 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.718201 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.720563 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bpw\" (UniqueName: \"kubernetes.io/projected/15dd7b39-32a4-458c-b95a-401064d028df-kube-api-access-z2bpw\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.720663 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-config-data\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.739296 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.739349 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.740370 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.753661 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-config-data\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.791018 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.823858 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.823986 4684 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.824021 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.824051 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkrdm\" (UniqueName: \"kubernetes.io/projected/50f5eb1f-ec36-426e-a675-b23ffe20e282-kube-api-access-jkrdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.824101 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-config-data\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.824146 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcbr\" (UniqueName: \"kubernetes.io/projected/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-kube-api-access-lfcbr\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.824241 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-logs\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.873585 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bpw\" (UniqueName: \"kubernetes.io/projected/15dd7b39-32a4-458c-b95a-401064d028df-kube-api-access-z2bpw\") pod \"nova-scheduler-0\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925568 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925671 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925715 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925734 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkrdm\" (UniqueName: \"kubernetes.io/projected/50f5eb1f-ec36-426e-a675-b23ffe20e282-kube-api-access-jkrdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925764 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-config-data\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925792 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcbr\" (UniqueName: \"kubernetes.io/projected/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-kube-api-access-lfcbr\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.925834 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-logs\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.926244 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-logs\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.935023 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.948427 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.957359 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.964811 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-config-data\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:43 crc kubenswrapper[4684]: I0123 09:34:43.977024 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:43 crc 
kubenswrapper[4684]: I0123 09:34:43.978927 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.015147 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.016199 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.024184 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.030662 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcbr\" (UniqueName: \"kubernetes.io/projected/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-kube-api-access-lfcbr\") pod \"nova-api-0\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " pod="openstack/nova-api-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.044724 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkrdm\" (UniqueName: \"kubernetes.io/projected/50f5eb1f-ec36-426e-a675-b23ffe20e282-kube-api-access-jkrdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.055229 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.107637 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.132147 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.132209 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659mw\" (UniqueName: \"kubernetes.io/projected/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-kube-api-access-659mw\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.132293 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-logs\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.132344 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-config-data\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.247211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659mw\" (UniqueName: \"kubernetes.io/projected/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-kube-api-access-659mw\") pod \"nova-metadata-0\" (UID: 
\"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.248882 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-logs\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.249092 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-config-data\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.249198 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.265502 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-logs\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.265832 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8d97cbc7-2chtn"] Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.268656 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.266561 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.269777 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-config-data\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.309090 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659mw\" (UniqueName: \"kubernetes.io/projected/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-kube-api-access-659mw\") pod \"nova-metadata-0\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.344369 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8d97cbc7-2chtn"] Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.351824 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-dns-svc\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.352148 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-nb\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.352289 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-sb\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.352405 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-config\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.352604 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jt7f\" (UniqueName: \"kubernetes.io/projected/42df2da0-3c64-4b95-9545-361fc18ccbaa-kube-api-access-6jt7f\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.420172 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.455969 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jt7f\" (UniqueName: \"kubernetes.io/projected/42df2da0-3c64-4b95-9545-361fc18ccbaa-kube-api-access-6jt7f\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.456103 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-dns-svc\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.456140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-nb\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.456187 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-sb\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.456221 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-config\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") 
" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.457298 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-config\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.458222 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-dns-svc\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.459345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-nb\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.459452 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-sb\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.506195 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-l7dxb"] Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.509841 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jt7f\" (UniqueName: \"kubernetes.io/projected/42df2da0-3c64-4b95-9545-361fc18ccbaa-kube-api-access-6jt7f\") pod \"dnsmasq-dns-8d97cbc7-2chtn\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: W0123 09:34:44.549277 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3ca078c_d881_4e98_95bf_7b7486f871d6.slice/crio-251e37db663b8ae069a8450ee54448d19b033fd69f3f67ad80fcb3a4a869d941 WatchSource:0}: Error finding container 251e37db663b8ae069a8450ee54448d19b033fd69f3f67ad80fcb3a4a869d941: Status 404 returned error can't find the container with id 251e37db663b8ae069a8450ee54448d19b033fd69f3f67ad80fcb3a4a869d941 Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.607105 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:44 crc kubenswrapper[4684]: I0123 09:34:44.948122 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.118069 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:45 crc kubenswrapper[4684]: W0123 09:34:45.284219 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f5eb1f_ec36_426e_a675_b23ffe20e282.slice/crio-cdcc6a5954d3e57d45d19b766d51418cb3855a20777356a834997754c0d2d8d0 WatchSource:0}: Error finding container cdcc6a5954d3e57d45d19b766d51418cb3855a20777356a834997754c0d2d8d0: Status 404 returned error can't find the container with id cdcc6a5954d3e57d45d19b766d51418cb3855a20777356a834997754c0d2d8d0 Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.288909 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.318430 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.349349 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bcpvp"] Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.350863 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.358855 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.359996 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l7dxb" event={"ID":"f3ca078c-d881-4e98-95bf-7b7486f871d6","Type":"ContainerStarted","Data":"f0a50d692a88c5ab02e4415ab085cf83d51031f7e4a2189f9016c3a8a4778762"} Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.360048 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l7dxb" event={"ID":"f3ca078c-d881-4e98-95bf-7b7486f871d6","Type":"ContainerStarted","Data":"251e37db663b8ae069a8450ee54448d19b033fd69f3f67ad80fcb3a4a869d941"} Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.361546 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.377759 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bcpvp"] Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.379243 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b470795-ca05-4eb5-bd9e-8137f92dc0a3","Type":"ContainerStarted","Data":"566b1bc128d1e1f00ee16a249ab1e3c740f12bf4ab77aa703c89934b5652bf17"} Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.396438 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50f5eb1f-ec36-426e-a675-b23ffe20e282","Type":"ContainerStarted","Data":"cdcc6a5954d3e57d45d19b766d51418cb3855a20777356a834997754c0d2d8d0"} Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.408046 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.408158 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-config-data\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.408207 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xrt\" (UniqueName: \"kubernetes.io/projected/57bca338-31bf-4447-b296-864d1dea776e-kube-api-access-z4xrt\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.408236 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-scripts\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.414089 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15dd7b39-32a4-458c-b95a-401064d028df","Type":"ContainerStarted","Data":"d5c1bd6bb2a740ae652da23c2525d36111fe6178ff22822e6b3fd2336d177bfc"} Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.426264 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e","Type":"ContainerStarted","Data":"ea676ff2834da13981a3a6d6e719f1b14eb6951814abf394fa6000b3892a3b50"} Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.510141 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-config-data\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.510237 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xrt\" (UniqueName: \"kubernetes.io/projected/57bca338-31bf-4447-b296-864d1dea776e-kube-api-access-z4xrt\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.510448 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-scripts\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.510720 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.522497 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-scripts\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.523262 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.539977 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-config-data\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.540461 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xrt\" (UniqueName: \"kubernetes.io/projected/57bca338-31bf-4447-b296-864d1dea776e-kube-api-access-z4xrt\") pod \"nova-cell1-conductor-db-sync-bcpvp\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.686778 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:34:45 crc kubenswrapper[4684]: I0123 09:34:45.711553 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8d97cbc7-2chtn"] Jan 23 09:34:46 crc kubenswrapper[4684]: I0123 09:34:46.261372 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bcpvp"] Jan 23 09:34:46 crc kubenswrapper[4684]: I0123 09:34:46.438484 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" event={"ID":"57bca338-31bf-4447-b296-864d1dea776e","Type":"ContainerStarted","Data":"509deb1855d2c4728256aab2d8145913c36e9c812ff044513051e621ef622b9f"} Jan 23 09:34:46 crc kubenswrapper[4684]: I0123 09:34:46.442325 4684 generic.go:334] "Generic (PLEG): container finished" podID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerID="14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28" exitCode=0 Jan 23 09:34:46 crc kubenswrapper[4684]: I0123 09:34:46.443359 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" event={"ID":"42df2da0-3c64-4b95-9545-361fc18ccbaa","Type":"ContainerDied","Data":"14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28"} Jan 23 09:34:46 crc kubenswrapper[4684]: I0123 09:34:46.443389 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" event={"ID":"42df2da0-3c64-4b95-9545-361fc18ccbaa","Type":"ContainerStarted","Data":"d9aea27ad3ae943d0e5373781d4f73a0484814c43c49049e66427f3e9bb629be"} Jan 23 09:34:46 crc kubenswrapper[4684]: I0123 09:34:46.467909 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-l7dxb" podStartSLOduration=3.467884136 podStartE2EDuration="3.467884136s" podCreationTimestamp="2026-01-23 09:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:46.464451177 +0000 UTC m=+1659.087829728" watchObservedRunningTime="2026-01-23 09:34:46.467884136 +0000 UTC m=+1659.091262677" Jan 23 09:34:47 crc kubenswrapper[4684]: I0123 09:34:47.454309 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" event={"ID":"57bca338-31bf-4447-b296-864d1dea776e","Type":"ContainerStarted","Data":"ef6fe5cc42a3c15cdc86ba1f8947b8dab11d1cb218beb770a9be7ef5069bcf13"} Jan 23 09:34:47 crc kubenswrapper[4684]: I0123 09:34:47.463102 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" event={"ID":"42df2da0-3c64-4b95-9545-361fc18ccbaa","Type":"ContainerStarted","Data":"89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69"} Jan 23 09:34:47 crc kubenswrapper[4684]: I0123 09:34:47.463411 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:47 crc kubenswrapper[4684]: I0123 09:34:47.475945 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" podStartSLOduration=2.475921191 podStartE2EDuration="2.475921191s" podCreationTimestamp="2026-01-23 09:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:47.471178604 +0000 UTC m=+1660.094557165" watchObservedRunningTime="2026-01-23 09:34:47.475921191 +0000 UTC 
m=+1660.099299732" Jan 23 09:34:47 crc kubenswrapper[4684]: I0123 09:34:47.503283 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" podStartSLOduration=3.5032596910000002 podStartE2EDuration="3.503259691s" podCreationTimestamp="2026-01-23 09:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:47.502471028 +0000 UTC m=+1660.125849589" watchObservedRunningTime="2026-01-23 09:34:47.503259691 +0000 UTC m=+1660.126638252" Jan 23 09:34:48 crc kubenswrapper[4684]: I0123 09:34:48.201001 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:48 crc kubenswrapper[4684]: I0123 09:34:48.294648 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.693407 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5vgbm"] Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.696193 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.702434 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vgbm"] Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.822780 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-catalog-content\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.822925 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-utilities\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.822968 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4r6q\" (UniqueName: \"kubernetes.io/projected/f33774cf-bd34-4d96-bef3-dbf5751ba774-kube-api-access-m4r6q\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.924091 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-utilities\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.924167 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4r6q\" (UniqueName: \"kubernetes.io/projected/f33774cf-bd34-4d96-bef3-dbf5751ba774-kube-api-access-m4r6q\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.924221 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-catalog-content\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.924817 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-catalog-content\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.925096 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-utilities\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:49 crc kubenswrapper[4684]: I0123 09:34:49.964603 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4r6q\" (UniqueName: \"kubernetes.io/projected/f33774cf-bd34-4d96-bef3-dbf5751ba774-kube-api-access-m4r6q\") pod \"certified-operators-5vgbm\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:50 crc kubenswrapper[4684]: I0123 09:34:50.030467 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:34:51 crc kubenswrapper[4684]: I0123 09:34:51.510442 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e","Type":"ContainerStarted","Data":"a6faadd2f4da809558cce77380dcdbf7864699cebdc1392c195c13b7f2c63441"} Jan 23 09:34:51 crc kubenswrapper[4684]: I0123 09:34:51.513964 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b470795-ca05-4eb5-bd9e-8137f92dc0a3","Type":"ContainerStarted","Data":"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c"} Jan 23 09:34:51 crc kubenswrapper[4684]: I0123 09:34:51.515801 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50f5eb1f-ec36-426e-a675-b23ffe20e282","Type":"ContainerStarted","Data":"9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443"} Jan 23 09:34:51 crc kubenswrapper[4684]: I0123 09:34:51.515997 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="50f5eb1f-ec36-426e-a675-b23ffe20e282" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443" gracePeriod=30 Jan 23 09:34:51 crc kubenswrapper[4684]: I0123 09:34:51.541219 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.86138737 podStartE2EDuration="8.541195209s" podCreationTimestamp="2026-01-23 09:34:43 +0000 UTC" firstStartedPulling="2026-01-23 09:34:45.295707945 +0000 UTC m=+1657.919086496" lastFinishedPulling="2026-01-23 09:34:50.975515794 +0000 UTC m=+1663.598894335" observedRunningTime="2026-01-23 09:34:51.531105197 +0000 UTC m=+1664.154483758" watchObservedRunningTime="2026-01-23 
09:34:51.541195209 +0000 UTC m=+1664.164573750" Jan 23 09:34:51 crc kubenswrapper[4684]: I0123 09:34:51.635088 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vgbm"] Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.533466 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b470795-ca05-4eb5-bd9e-8137f92dc0a3","Type":"ContainerStarted","Data":"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a"} Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.533589 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-log" containerID="cri-o://9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c" gracePeriod=30 Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.534241 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-metadata" containerID="cri-o://35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a" gracePeriod=30 Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.539429 4684 generic.go:334] "Generic (PLEG): container finished" podID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerID="a9af033c9e48d7e18f71eeca3fd50c1b00fea299f546cc77eef4950db3505265" exitCode=0 Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.539501 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerDied","Data":"a9af033c9e48d7e18f71eeca3fd50c1b00fea299f546cc77eef4950db3505265"} Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.539558 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerStarted","Data":"0b0029ceb891eba792f86e54adad977aa8b65698d3beab77e48a2cc419e3c883"} Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.543873 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15dd7b39-32a4-458c-b95a-401064d028df","Type":"ContainerStarted","Data":"29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651"} Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.549344 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e","Type":"ContainerStarted","Data":"5bb3593da0b51fcc3d7297b504c2a0e9d68c353eb732cc188b4c11252d67fc95"} Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.570569 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.878461727 podStartE2EDuration="9.570550691s" podCreationTimestamp="2026-01-23 09:34:43 +0000 UTC" firstStartedPulling="2026-01-23 09:34:45.281836044 +0000 UTC m=+1657.905214585" lastFinishedPulling="2026-01-23 09:34:50.973925008 +0000 UTC m=+1663.597303549" observedRunningTime="2026-01-23 09:34:52.558268326 +0000 UTC m=+1665.181646877" watchObservedRunningTime="2026-01-23 09:34:52.570550691 +0000 UTC m=+1665.193929232" Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.577361 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.557636751 
podStartE2EDuration="9.577339037s" podCreationTimestamp="2026-01-23 09:34:43 +0000 UTC" firstStartedPulling="2026-01-23 09:34:44.954207641 +0000 UTC m=+1657.577586182" lastFinishedPulling="2026-01-23 09:34:50.973909927 +0000 UTC m=+1663.597288468" observedRunningTime="2026-01-23 09:34:52.574734062 +0000 UTC m=+1665.198112623" watchObservedRunningTime="2026-01-23 09:34:52.577339037 +0000 UTC m=+1665.200717588" Jan 23 09:34:52 crc kubenswrapper[4684]: I0123 09:34:52.594684 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.746649526 podStartE2EDuration="9.594662238s" podCreationTimestamp="2026-01-23 09:34:43 +0000 UTC" firstStartedPulling="2026-01-23 09:34:45.125910366 +0000 UTC m=+1657.749288907" lastFinishedPulling="2026-01-23 09:34:50.973923078 +0000 UTC m=+1663.597301619" observedRunningTime="2026-01-23 09:34:52.590133297 +0000 UTC m=+1665.213511848" watchObservedRunningTime="2026-01-23 09:34:52.594662238 +0000 UTC m=+1665.218040779" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.292000 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.397333 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-logs\") pod \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.397902 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-combined-ca-bundle\") pod \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.397961 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-logs" (OuterVolumeSpecName: "logs") pod "3b470795-ca05-4eb5-bd9e-8137f92dc0a3" (UID: "3b470795-ca05-4eb5-bd9e-8137f92dc0a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.398058 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-659mw\" (UniqueName: \"kubernetes.io/projected/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-kube-api-access-659mw\") pod \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.398177 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-config-data\") pod \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\" (UID: \"3b470795-ca05-4eb5-bd9e-8137f92dc0a3\") " Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.399185 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.403411 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-kube-api-access-659mw" (OuterVolumeSpecName: "kube-api-access-659mw") pod "3b470795-ca05-4eb5-bd9e-8137f92dc0a3" (UID: "3b470795-ca05-4eb5-bd9e-8137f92dc0a3"). InnerVolumeSpecName "kube-api-access-659mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.428600 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-config-data" (OuterVolumeSpecName: "config-data") pod "3b470795-ca05-4eb5-bd9e-8137f92dc0a3" (UID: "3b470795-ca05-4eb5-bd9e-8137f92dc0a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.445551 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b470795-ca05-4eb5-bd9e-8137f92dc0a3" (UID: "3b470795-ca05-4eb5-bd9e-8137f92dc0a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.501165 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.501206 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-659mw\" (UniqueName: \"kubernetes.io/projected/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-kube-api-access-659mw\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.501217 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b470795-ca05-4eb5-bd9e-8137f92dc0a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.561441 4684 generic.go:334] "Generic (PLEG): container finished" podID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerID="35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a" exitCode=0 Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.561477 4684 generic.go:334] "Generic (PLEG): container finished" podID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerID="9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c" exitCode=143 Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.562818 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.566398 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b470795-ca05-4eb5-bd9e-8137f92dc0a3","Type":"ContainerDied","Data":"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a"} Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.566451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b470795-ca05-4eb5-bd9e-8137f92dc0a3","Type":"ContainerDied","Data":"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c"} Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.566464 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b470795-ca05-4eb5-bd9e-8137f92dc0a3","Type":"ContainerDied","Data":"566b1bc128d1e1f00ee16a249ab1e3c740f12bf4ab77aa703c89934b5652bf17"} Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.566482 4684 scope.go:117] "RemoveContainer" containerID="35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.620805 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.637663 4684 scope.go:117] "RemoveContainer" containerID="9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.650324 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.682311 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:53 crc kubenswrapper[4684]: E0123 09:34:53.683037 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-log" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.683244 4684 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-log" Jan 23 09:34:53 crc kubenswrapper[4684]: E0123 09:34:53.683341 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-metadata" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.683411 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-metadata" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.683915 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-log" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.684022 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" containerName="nova-metadata-metadata" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.685265 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.689985 4684 scope.go:117] "RemoveContainer" containerID="35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.690490 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.700902 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:53 crc kubenswrapper[4684]: E0123 09:34:53.701338 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a\": container with ID starting with 35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a not found: ID does not exist" containerID="35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.701394 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a"} err="failed to get container status \"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a\": rpc error: code = NotFound desc = could not find container \"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a\": container with ID starting with 35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a not found: ID does not exist" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.701426 4684 scope.go:117] "RemoveContainer" containerID="9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.701785 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 09:34:53 crc kubenswrapper[4684]: E0123 09:34:53.708336 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c\": container with ID starting with 9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c not found: ID does not exist" containerID="9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 
09:34:53.708392 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c"} err="failed to get container status \"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c\": rpc error: code = NotFound desc = could not find container \"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c\": container with ID starting with 9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c not found: ID does not exist" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.708423 4684 scope.go:117] "RemoveContainer" containerID="35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.711239 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a"} err="failed to get container status \"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a\": rpc error: code = NotFound desc = could not find container \"35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a\": container with ID starting with 35014bfdb5198d32f6ff6b01fcfb881deff78e6f3aae42dd158a09d4bb32873a not found: ID does not exist" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.711284 4684 scope.go:117] "RemoveContainer" containerID="9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.723039 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c"} err="failed to get container status \"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c\": rpc error: code = NotFound desc = could not find container \"9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c\": container with ID starting with 9f25396590d5fccbf24f5558d3395482d13877e0c5dffb6257360ad3897a549c not found: ID does not exist" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.808837 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567a0d3d-62ca-40aa-b10b-0b853e0ec646-logs\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.808935 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.808960 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-config-data\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.808977 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.809012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82dls\" (UniqueName: \"kubernetes.io/projected/567a0d3d-62ca-40aa-b10b-0b853e0ec646-kube-api-access-82dls\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.910442 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.910493 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-config-data\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.910513 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.910545 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82dls\" (UniqueName: \"kubernetes.io/projected/567a0d3d-62ca-40aa-b10b-0b853e0ec646-kube-api-access-82dls\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.910635 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567a0d3d-62ca-40aa-b10b-0b853e0ec646-logs\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.911172 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567a0d3d-62ca-40aa-b10b-0b853e0ec646-logs\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.914811 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.914861 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.915021 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-config-data\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4684]: I0123 09:34:53.929182 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82dls\" (UniqueName: \"kubernetes.io/projected/567a0d3d-62ca-40aa-b10b-0b853e0ec646-kube-api-access-82dls\") pod \"nova-metadata-0\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " pod="openstack/nova-metadata-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.015541 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.027269 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.027476 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.057044 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.057088 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.061329 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.109089 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.580301 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerStarted","Data":"e84ce78b285228f4c96b13ca8a93ea665cd36a0c83bd0d88d7b8a23db0de2723"} Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.628550 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.658945 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.663909 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.801033 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c677b759-mvp9m"] Jan 23 09:34:54 crc kubenswrapper[4684]: I0123 09:34:54.801309 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerName="dnsmasq-dns" containerID="cri-o://767992535bab45f55830e63d207705d090cf2bdf374c54e5f3e769573a7cef3b" gracePeriod=10 Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.142023 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.142003 4684 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.612242 4684 generic.go:334] "Generic (PLEG): container finished" podID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerID="767992535bab45f55830e63d207705d090cf2bdf374c54e5f3e769573a7cef3b" exitCode=0 Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.621748 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b470795-ca05-4eb5-bd9e-8137f92dc0a3" path="/var/lib/kubelet/pods/3b470795-ca05-4eb5-bd9e-8137f92dc0a3/volumes" Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.622535 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" event={"ID":"72621f7c-b422-4946-afbc-8d3d049ee05c","Type":"ContainerDied","Data":"767992535bab45f55830e63d207705d090cf2bdf374c54e5f3e769573a7cef3b"} Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.623844 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"567a0d3d-62ca-40aa-b10b-0b853e0ec646","Type":"ContainerStarted","Data":"0bd721388a1a0618844cc844b08443a2efb15dae5ae4d5fd34efd1bd468884c4"} Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.623887 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"567a0d3d-62ca-40aa-b10b-0b853e0ec646","Type":"ContainerStarted","Data":"eefb17490ec58b1e2febd0e1dc4dd3a00cd9af2da7351bd1db78b91abde84978"} Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.816206 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.887778 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-dns-svc\") pod \"72621f7c-b422-4946-afbc-8d3d049ee05c\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.888263 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-nb\") pod \"72621f7c-b422-4946-afbc-8d3d049ee05c\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.888372 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-sb\") pod \"72621f7c-b422-4946-afbc-8d3d049ee05c\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.888453 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k46vk\" (UniqueName: \"kubernetes.io/projected/72621f7c-b422-4946-afbc-8d3d049ee05c-kube-api-access-k46vk\") pod \"72621f7c-b422-4946-afbc-8d3d049ee05c\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.888664 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-config\") pod \"72621f7c-b422-4946-afbc-8d3d049ee05c\" (UID: \"72621f7c-b422-4946-afbc-8d3d049ee05c\") " Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.923039 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72621f7c-b422-4946-afbc-8d3d049ee05c-kube-api-access-k46vk" (OuterVolumeSpecName: "kube-api-access-k46vk") pod "72621f7c-b422-4946-afbc-8d3d049ee05c" (UID: "72621f7c-b422-4946-afbc-8d3d049ee05c"). InnerVolumeSpecName "kube-api-access-k46vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:55 crc kubenswrapper[4684]: I0123 09:34:55.991866 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k46vk\" (UniqueName: \"kubernetes.io/projected/72621f7c-b422-4946-afbc-8d3d049ee05c-kube-api-access-k46vk\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.003575 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-config" (OuterVolumeSpecName: "config") pod "72621f7c-b422-4946-afbc-8d3d049ee05c" (UID: "72621f7c-b422-4946-afbc-8d3d049ee05c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.020336 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72621f7c-b422-4946-afbc-8d3d049ee05c" (UID: "72621f7c-b422-4946-afbc-8d3d049ee05c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.046314 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72621f7c-b422-4946-afbc-8d3d049ee05c" (UID: "72621f7c-b422-4946-afbc-8d3d049ee05c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.087731 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72621f7c-b422-4946-afbc-8d3d049ee05c" (UID: "72621f7c-b422-4946-afbc-8d3d049ee05c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.095407 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.095606 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.095816 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.095927 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72621f7c-b422-4946-afbc-8d3d049ee05c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.633460 4684 generic.go:334] "Generic (PLEG): container finished" podID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerID="e84ce78b285228f4c96b13ca8a93ea665cd36a0c83bd0d88d7b8a23db0de2723" exitCode=0 Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.633822 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerDied","Data":"e84ce78b285228f4c96b13ca8a93ea665cd36a0c83bd0d88d7b8a23db0de2723"} Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.640784 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" event={"ID":"72621f7c-b422-4946-afbc-8d3d049ee05c","Type":"ContainerDied","Data":"4b7d95924505f33bd1b354484bc93eca8645cc0c4a783ecd0443011cdffb8074"} Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.640888 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68c677b759-mvp9m" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.641114 4684 scope.go:117] "RemoveContainer" containerID="767992535bab45f55830e63d207705d090cf2bdf374c54e5f3e769573a7cef3b" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.651436 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"567a0d3d-62ca-40aa-b10b-0b853e0ec646","Type":"ContainerStarted","Data":"fa28ac4d39b7149ae613e6693c85d78770a39b55d479bb3bbece267ee259c71f"} Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.669477 4684 scope.go:117] "RemoveContainer" containerID="dda4ad2f1bf4065b2eaacc6c6322fe692d8f4bb5d47be145db679c8eb509751e" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.704012 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.703969469 podStartE2EDuration="3.703969469s" podCreationTimestamp="2026-01-23 09:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:56.685776553 +0000 UTC m=+1669.309155114" watchObservedRunningTime="2026-01-23 09:34:56.703969469 +0000 UTC m=+1669.327348010" Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.725162 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c677b759-mvp9m"] Jan 23 09:34:56 crc kubenswrapper[4684]: I0123 09:34:56.735975 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68c677b759-mvp9m"] Jan 23 09:34:57 crc kubenswrapper[4684]: I0123 09:34:57.594233 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" path="/var/lib/kubelet/pods/72621f7c-b422-4946-afbc-8d3d049ee05c/volumes" Jan 23 09:34:57 crc kubenswrapper[4684]: I0123 09:34:57.664021 4684 generic.go:334] "Generic (PLEG): container finished" podID="f3ca078c-d881-4e98-95bf-7b7486f871d6" containerID="f0a50d692a88c5ab02e4415ab085cf83d51031f7e4a2189f9016c3a8a4778762" exitCode=0 Jan 23 09:34:57 crc kubenswrapper[4684]: I0123 09:34:57.664202 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l7dxb" event={"ID":"f3ca078c-d881-4e98-95bf-7b7486f871d6","Type":"ContainerDied","Data":"f0a50d692a88c5ab02e4415ab085cf83d51031f7e4a2189f9016c3a8a4778762"} Jan 23 09:34:57 crc kubenswrapper[4684]: I0123 09:34:57.670680 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerStarted","Data":"c5b6437ad8b79cea69578ac45d820eac798e9bc6ab79bd8a5829a23967bccde0"} Jan 23 09:34:57 crc kubenswrapper[4684]: I0123 09:34:57.716151 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vgbm" podStartSLOduration=3.9358717739999998 podStartE2EDuration="8.716132654s" podCreationTimestamp="2026-01-23 09:34:49 +0000 UTC" firstStartedPulling="2026-01-23 09:34:52.543090367 +0000 UTC m=+1665.166468918" lastFinishedPulling="2026-01-23 09:34:57.323351257 +0000 UTC m=+1669.946729798" observedRunningTime="2026-01-23 09:34:57.70908503 +0000 UTC m=+1670.332463561" watchObservedRunningTime="2026-01-23 09:34:57.716132654 +0000 UTC m=+1670.339511195" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.016339 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.017870 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.082199 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.155375 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-combined-ca-bundle\") pod \"f3ca078c-d881-4e98-95bf-7b7486f871d6\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.155534 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-scripts\") pod \"f3ca078c-d881-4e98-95bf-7b7486f871d6\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.155588 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9vsv\" (UniqueName: \"kubernetes.io/projected/f3ca078c-d881-4e98-95bf-7b7486f871d6-kube-api-access-s9vsv\") pod \"f3ca078c-d881-4e98-95bf-7b7486f871d6\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.155730 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-config-data\") pod \"f3ca078c-d881-4e98-95bf-7b7486f871d6\" (UID: \"f3ca078c-d881-4e98-95bf-7b7486f871d6\") " Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.161125 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ca078c-d881-4e98-95bf-7b7486f871d6-kube-api-access-s9vsv" (OuterVolumeSpecName: "kube-api-access-s9vsv") pod "f3ca078c-d881-4e98-95bf-7b7486f871d6" (UID: "f3ca078c-d881-4e98-95bf-7b7486f871d6"). InnerVolumeSpecName "kube-api-access-s9vsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.162607 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-scripts" (OuterVolumeSpecName: "scripts") pod "f3ca078c-d881-4e98-95bf-7b7486f871d6" (UID: "f3ca078c-d881-4e98-95bf-7b7486f871d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.181344 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3ca078c-d881-4e98-95bf-7b7486f871d6" (UID: "f3ca078c-d881-4e98-95bf-7b7486f871d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.181915 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-config-data" (OuterVolumeSpecName: "config-data") pod "f3ca078c-d881-4e98-95bf-7b7486f871d6" (UID: "f3ca078c-d881-4e98-95bf-7b7486f871d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.258396 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9vsv\" (UniqueName: \"kubernetes.io/projected/f3ca078c-d881-4e98-95bf-7b7486f871d6-kube-api-access-s9vsv\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.258444 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.258459 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.258470 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ca078c-d881-4e98-95bf-7b7486f871d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.688134 4684 generic.go:334] "Generic (PLEG): container finished" podID="57bca338-31bf-4447-b296-864d1dea776e" containerID="ef6fe5cc42a3c15cdc86ba1f8947b8dab11d1cb218beb770a9be7ef5069bcf13" exitCode=0 Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.688181 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" event={"ID":"57bca338-31bf-4447-b296-864d1dea776e","Type":"ContainerDied","Data":"ef6fe5cc42a3c15cdc86ba1f8947b8dab11d1cb218beb770a9be7ef5069bcf13"} Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.690485 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-l7dxb" event={"ID":"f3ca078c-d881-4e98-95bf-7b7486f871d6","Type":"ContainerDied","Data":"251e37db663b8ae069a8450ee54448d19b033fd69f3f67ad80fcb3a4a869d941"} Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.690542 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251e37db663b8ae069a8450ee54448d19b033fd69f3f67ad80fcb3a4a869d941" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.690827 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-l7dxb" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.850050 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.901901 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.902253 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-log" containerID="cri-o://a6faadd2f4da809558cce77380dcdbf7864699cebdc1392c195c13b7f2c63441" gracePeriod=30 Jan 23 09:34:59 crc kubenswrapper[4684]: I0123 09:34:59.902468 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-api" containerID="cri-o://5bb3593da0b51fcc3d7297b504c2a0e9d68c353eb732cc188b4c11252d67fc95" gracePeriod=30 Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.025520 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.026212 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="15dd7b39-32a4-458c-b95a-401064d028df" containerName="nova-scheduler-scheduler" containerID="cri-o://29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651" gracePeriod=30 Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.031939 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.032768 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.057412 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.701486 4684 generic.go:334] "Generic (PLEG): container finished" podID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerID="a6faadd2f4da809558cce77380dcdbf7864699cebdc1392c195c13b7f2c63441" exitCode=143 Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.701572 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e","Type":"ContainerDied","Data":"a6faadd2f4da809558cce77380dcdbf7864699cebdc1392c195c13b7f2c63441"} Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.702015 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-log" containerID="cri-o://0bd721388a1a0618844cc844b08443a2efb15dae5ae4d5fd34efd1bd468884c4" gracePeriod=30 Jan 23 09:35:00 crc kubenswrapper[4684]: I0123 09:35:00.702479 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-metadata" containerID="cri-o://fa28ac4d39b7149ae613e6693c85d78770a39b55d479bb3bbece267ee259c71f" gracePeriod=30 Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.127336 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5vgbm" 
podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="registry-server" probeResult="failure" output=< Jan 23 09:35:01 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 09:35:01 crc kubenswrapper[4684]: > Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.224988 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.296444 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-scripts\") pod \"57bca338-31bf-4447-b296-864d1dea776e\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.296569 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4xrt\" (UniqueName: \"kubernetes.io/projected/57bca338-31bf-4447-b296-864d1dea776e-kube-api-access-z4xrt\") pod \"57bca338-31bf-4447-b296-864d1dea776e\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.296595 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-config-data\") pod \"57bca338-31bf-4447-b296-864d1dea776e\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.296780 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-combined-ca-bundle\") pod \"57bca338-31bf-4447-b296-864d1dea776e\" (UID: \"57bca338-31bf-4447-b296-864d1dea776e\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.321073 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bca338-31bf-4447-b296-864d1dea776e-kube-api-access-z4xrt" (OuterVolumeSpecName: "kube-api-access-z4xrt") pod "57bca338-31bf-4447-b296-864d1dea776e" (UID: "57bca338-31bf-4447-b296-864d1dea776e"). InnerVolumeSpecName "kube-api-access-z4xrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.321544 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-scripts" (OuterVolumeSpecName: "scripts") pod "57bca338-31bf-4447-b296-864d1dea776e" (UID: "57bca338-31bf-4447-b296-864d1dea776e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.354636 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-config-data" (OuterVolumeSpecName: "config-data") pod "57bca338-31bf-4447-b296-864d1dea776e" (UID: "57bca338-31bf-4447-b296-864d1dea776e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.365171 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57bca338-31bf-4447-b296-864d1dea776e" (UID: "57bca338-31bf-4447-b296-864d1dea776e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.399092 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4xrt\" (UniqueName: \"kubernetes.io/projected/57bca338-31bf-4447-b296-864d1dea776e-kube-api-access-z4xrt\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.399131 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.399140 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.399149 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bca338-31bf-4447-b296-864d1dea776e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.570597 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.629387 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-config-data\") pod \"15dd7b39-32a4-458c-b95a-401064d028df\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.629634 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-combined-ca-bundle\") pod \"15dd7b39-32a4-458c-b95a-401064d028df\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.629820 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2bpw\" (UniqueName: \"kubernetes.io/projected/15dd7b39-32a4-458c-b95a-401064d028df-kube-api-access-z2bpw\") pod \"15dd7b39-32a4-458c-b95a-401064d028df\" (UID: \"15dd7b39-32a4-458c-b95a-401064d028df\") " Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.660959 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15dd7b39-32a4-458c-b95a-401064d028df-kube-api-access-z2bpw" (OuterVolumeSpecName: "kube-api-access-z2bpw") pod "15dd7b39-32a4-458c-b95a-401064d028df" (UID: "15dd7b39-32a4-458c-b95a-401064d028df"). InnerVolumeSpecName "kube-api-access-z2bpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.730735 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15dd7b39-32a4-458c-b95a-401064d028df" (UID: "15dd7b39-32a4-458c-b95a-401064d028df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.732487 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.732513 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2bpw\" (UniqueName: \"kubernetes.io/projected/15dd7b39-32a4-458c-b95a-401064d028df-kube-api-access-z2bpw\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.739938 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-config-data" (OuterVolumeSpecName: "config-data") pod "15dd7b39-32a4-458c-b95a-401064d028df" (UID: "15dd7b39-32a4-458c-b95a-401064d028df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.782216 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.782231 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bcpvp" event={"ID":"57bca338-31bf-4447-b296-864d1dea776e","Type":"ContainerDied","Data":"509deb1855d2c4728256aab2d8145913c36e9c812ff044513051e621ef622b9f"} Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.782266 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509deb1855d2c4728256aab2d8145913c36e9c812ff044513051e621ef622b9f" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.784155 4684 generic.go:334] "Generic (PLEG): container finished" podID="15dd7b39-32a4-458c-b95a-401064d028df" containerID="29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651" exitCode=0 Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.784219 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15dd7b39-32a4-458c-b95a-401064d028df","Type":"ContainerDied","Data":"29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651"} Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.784239 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15dd7b39-32a4-458c-b95a-401064d028df","Type":"ContainerDied","Data":"d5c1bd6bb2a740ae652da23c2525d36111fe6178ff22822e6b3fd2336d177bfc"} Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.784258 4684 scope.go:117] "RemoveContainer" containerID="29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.784364 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.793231 4684 generic.go:334] "Generic (PLEG): container finished" podID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerID="fa28ac4d39b7149ae613e6693c85d78770a39b55d479bb3bbece267ee259c71f" exitCode=0 Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.793277 4684 generic.go:334] "Generic (PLEG): container finished" podID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerID="0bd721388a1a0618844cc844b08443a2efb15dae5ae4d5fd34efd1bd468884c4" exitCode=143 Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.793295 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"567a0d3d-62ca-40aa-b10b-0b853e0ec646","Type":"ContainerDied","Data":"fa28ac4d39b7149ae613e6693c85d78770a39b55d479bb3bbece267ee259c71f"} Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.793317 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"567a0d3d-62ca-40aa-b10b-0b853e0ec646","Type":"ContainerDied","Data":"0bd721388a1a0618844cc844b08443a2efb15dae5ae4d5fd34efd1bd468884c4"} Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.854900 4684 scope.go:117] "RemoveContainer" containerID="29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.858761 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.859220 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651\": container with ID starting with 29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651 not found: ID does not exist" containerID="29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.859340 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651"} err="failed to get container status \"29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651\": rpc error: code = NotFound desc = could not find container \"29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651\": container with ID starting with 29fc2896f389e69fc327c923678ae4c5bdb3b648e0e1c4174a1e077a72951651 not found: ID does not exist" Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.859232 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerName="init" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.859506 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerName="init" Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.859566 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bca338-31bf-4447-b296-864d1dea776e" containerName="nova-cell1-conductor-db-sync" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.859616 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bca338-31bf-4447-b296-864d1dea776e" containerName="nova-cell1-conductor-db-sync" Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.859673 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15dd7b39-32a4-458c-b95a-401064d028df" 
containerName="nova-scheduler-scheduler" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.864777 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="15dd7b39-32a4-458c-b95a-401064d028df" containerName="nova-scheduler-scheduler" Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.864842 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ca078c-d881-4e98-95bf-7b7486f871d6" containerName="nova-manage" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.864849 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ca078c-d881-4e98-95bf-7b7486f871d6" containerName="nova-manage" Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.864862 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerName="dnsmasq-dns" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.864869 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerName="dnsmasq-dns" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.865214 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="72621f7c-b422-4946-afbc-8d3d049ee05c" containerName="dnsmasq-dns" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.865228 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ca078c-d881-4e98-95bf-7b7486f871d6" containerName="nova-manage" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.865239 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="15dd7b39-32a4-458c-b95a-401064d028df" containerName="nova-scheduler-scheduler" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.865254 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bca338-31bf-4447-b296-864d1dea776e" containerName="nova-cell1-conductor-db-sync" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.867734 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.871302 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dd7b39-32a4-458c-b95a-401064d028df-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.874889 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.877485 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.952391 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.955918 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.975764 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.976048 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36234df-99ba-470a-8309-55d1e0f53072-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.976115 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76xw\" (UniqueName: \"kubernetes.io/projected/a36234df-99ba-470a-8309-55d1e0f53072-kube-api-access-k76xw\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.976280 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36234df-99ba-470a-8309-55d1e0f53072-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.999021 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.999405 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-log" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.999428 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-log" Jan 23 09:35:01 crc kubenswrapper[4684]: E0123 09:35:01.999459 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-metadata" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.999465 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-metadata" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.999627 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-metadata" Jan 23 09:35:01 crc kubenswrapper[4684]: I0123 09:35:01.999642 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" containerName="nova-metadata-log" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.000412 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.005788 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.015074 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.077503 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82dls\" (UniqueName: \"kubernetes.io/projected/567a0d3d-62ca-40aa-b10b-0b853e0ec646-kube-api-access-82dls\") pod \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.077571 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567a0d3d-62ca-40aa-b10b-0b853e0ec646-logs\") pod \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.077617 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-nova-metadata-tls-certs\") pod \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.077843 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-combined-ca-bundle\") pod \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.077901 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-config-data\") pod \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\" (UID: \"567a0d3d-62ca-40aa-b10b-0b853e0ec646\") " Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.078122 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36234df-99ba-470a-8309-55d1e0f53072-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.078169 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76xw\" (UniqueName: \"kubernetes.io/projected/a36234df-99ba-470a-8309-55d1e0f53072-kube-api-access-k76xw\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.078276 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36234df-99ba-470a-8309-55d1e0f53072-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.078729 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567a0d3d-62ca-40aa-b10b-0b853e0ec646-logs" (OuterVolumeSpecName: "logs") pod 
"567a0d3d-62ca-40aa-b10b-0b853e0ec646" (UID: "567a0d3d-62ca-40aa-b10b-0b853e0ec646"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.092960 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567a0d3d-62ca-40aa-b10b-0b853e0ec646-kube-api-access-82dls" (OuterVolumeSpecName: "kube-api-access-82dls") pod "567a0d3d-62ca-40aa-b10b-0b853e0ec646" (UID: "567a0d3d-62ca-40aa-b10b-0b853e0ec646"). InnerVolumeSpecName "kube-api-access-82dls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.093789 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36234df-99ba-470a-8309-55d1e0f53072-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.095432 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36234df-99ba-470a-8309-55d1e0f53072-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.112178 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76xw\" (UniqueName: \"kubernetes.io/projected/a36234df-99ba-470a-8309-55d1e0f53072-kube-api-access-k76xw\") pod \"nova-cell1-conductor-0\" (UID: \"a36234df-99ba-470a-8309-55d1e0f53072\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.126275 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-config-data" (OuterVolumeSpecName: "config-data") pod "567a0d3d-62ca-40aa-b10b-0b853e0ec646" (UID: "567a0d3d-62ca-40aa-b10b-0b853e0ec646"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.140009 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "567a0d3d-62ca-40aa-b10b-0b853e0ec646" (UID: "567a0d3d-62ca-40aa-b10b-0b853e0ec646"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.173659 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "567a0d3d-62ca-40aa-b10b-0b853e0ec646" (UID: "567a0d3d-62ca-40aa-b10b-0b853e0ec646"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180198 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2s2\" (UniqueName: \"kubernetes.io/projected/d757dc5c-a82e-403e-a11f-213b043a1b87-kube-api-access-jt2s2\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180282 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-config-data\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180319 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180399 4684 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180410 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180418 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/567a0d3d-62ca-40aa-b10b-0b853e0ec646-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180427 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82dls\" (UniqueName: \"kubernetes.io/projected/567a0d3d-62ca-40aa-b10b-0b853e0ec646-kube-api-access-82dls\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.180436 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/567a0d3d-62ca-40aa-b10b-0b853e0ec646-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.212508 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.282029 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-config-data\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.282111 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.282211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2s2\" (UniqueName: \"kubernetes.io/projected/d757dc5c-a82e-403e-a11f-213b043a1b87-kube-api-access-jt2s2\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.287161 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-config-data\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.289492 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.307472 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2s2\" (UniqueName: \"kubernetes.io/projected/d757dc5c-a82e-403e-a11f-213b043a1b87-kube-api-access-jt2s2\") pod \"nova-scheduler-0\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.320321 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.798493 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 09:35:02 crc kubenswrapper[4684]: W0123 09:35:02.804984 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36234df_99ba_470a_8309_55d1e0f53072.slice/crio-a9981a10f78af0976baf9db0e4308e0c0c5cf4412d057426359aef0751b2a8ae WatchSource:0}: Error finding container a9981a10f78af0976baf9db0e4308e0c0c5cf4412d057426359aef0751b2a8ae: Status 404 returned error can't find the container with id a9981a10f78af0976baf9db0e4308e0c0c5cf4412d057426359aef0751b2a8ae Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.810013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"567a0d3d-62ca-40aa-b10b-0b853e0ec646","Type":"ContainerDied","Data":"eefb17490ec58b1e2febd0e1dc4dd3a00cd9af2da7351bd1db78b91abde84978"} Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.810071 4684 scope.go:117] "RemoveContainer" containerID="fa28ac4d39b7149ae613e6693c85d78770a39b55d479bb3bbece267ee259c71f" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.810125 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.870943 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.880048 4684 scope.go:117] "RemoveContainer" containerID="0bd721388a1a0618844cc844b08443a2efb15dae5ae4d5fd34efd1bd468884c4" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.903442 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.912760 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.917723 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.920643 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.920979 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.927303 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:02 crc kubenswrapper[4684]: I0123 09:35:02.953906 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.007522 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpcn\" (UniqueName: \"kubernetes.io/projected/8422775d-1328-4c3b-ab94-f235f45da903-kube-api-access-5tpcn\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.007578 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-config-data\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.007602 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.007647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8422775d-1328-4c3b-ab94-f235f45da903-logs\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.007721 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.109075 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8422775d-1328-4c3b-ab94-f235f45da903-logs\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.109182 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.109264 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tpcn\" (UniqueName: 
\"kubernetes.io/projected/8422775d-1328-4c3b-ab94-f235f45da903-kube-api-access-5tpcn\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.109303 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-config-data\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.109326 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.109558 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8422775d-1328-4c3b-ab94-f235f45da903-logs\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.115212 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-config-data\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.126333 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.128509 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.134194 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpcn\" (UniqueName: \"kubernetes.io/projected/8422775d-1328-4c3b-ab94-f235f45da903-kube-api-access-5tpcn\") pod \"nova-metadata-0\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.213364 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.213609 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" containerName="kube-state-metrics" containerID="cri-o://b95156a791bf23ae774336a0514e84c390ba31fc6386be32beea115aec8187db" gracePeriod=30 Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.259660 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.594771 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15dd7b39-32a4-458c-b95a-401064d028df" path="/var/lib/kubelet/pods/15dd7b39-32a4-458c-b95a-401064d028df/volumes" Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.595879 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567a0d3d-62ca-40aa-b10b-0b853e0ec646" path="/var/lib/kubelet/pods/567a0d3d-62ca-40aa-b10b-0b853e0ec646/volumes" Jan 23 09:35:03 crc kubenswrapper[4684]: W0123 09:35:03.736234 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8422775d_1328_4c3b_ab94_f235f45da903.slice/crio-076d1b5b473d697ca3e476f7444ed499ef0d22babc884eadecd1f9fd272e7167 WatchSource:0}: Error finding container 076d1b5b473d697ca3e476f7444ed499ef0d22babc884eadecd1f9fd272e7167: Status 404 returned error can't find the container with id 076d1b5b473d697ca3e476f7444ed499ef0d22babc884eadecd1f9fd272e7167 Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.745393 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.819447 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8422775d-1328-4c3b-ab94-f235f45da903","Type":"ContainerStarted","Data":"076d1b5b473d697ca3e476f7444ed499ef0d22babc884eadecd1f9fd272e7167"} Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.822305 4684 generic.go:334] "Generic (PLEG): container finished" podID="48e55475-0575-41e9-9949-d5bdb86ee565" containerID="b95156a791bf23ae774336a0514e84c390ba31fc6386be32beea115aec8187db" exitCode=2 Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.822386 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48e55475-0575-41e9-9949-d5bdb86ee565","Type":"ContainerDied","Data":"b95156a791bf23ae774336a0514e84c390ba31fc6386be32beea115aec8187db"} Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.824148 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d757dc5c-a82e-403e-a11f-213b043a1b87","Type":"ContainerStarted","Data":"308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf"} Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.824181 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d757dc5c-a82e-403e-a11f-213b043a1b87","Type":"ContainerStarted","Data":"1f0e3b63c148c6a2ed7afda322210130c3a22db064ace08f38a16d7c3a0f5521"} Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.831873 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a36234df-99ba-470a-8309-55d1e0f53072","Type":"ContainerStarted","Data":"06c13af549cf3288d9ac4821912dfdb55dceac6816ec88db9b423630ab7b38e1"} Jan 23 09:35:03 crc kubenswrapper[4684]: I0123 09:35:03.831924 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a36234df-99ba-470a-8309-55d1e0f53072","Type":"ContainerStarted","Data":"a9981a10f78af0976baf9db0e4308e0c0c5cf4412d057426359aef0751b2a8ae"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.354959 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.356099 4684 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-central-agent" containerID="cri-o://130a4497d5d83d14210764ce6ccaedb36dde9c3686a5be3a26020ed76249fdd4" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.356143 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="sg-core" containerID="cri-o://d9d5064a6bd442ab834cc4d983b89821968c0cd7e07b209928a830c29c998672" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.356144 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="proxy-httpd" containerID="cri-o://4f91368c55733959997086ae0c0025cedf3bb8e0f40669be9100d0ca465f4899" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.356242 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-notification-agent" containerID="cri-o://21772dd7c3987e334a247af035368b487c7a80039dd4e546f1fa783ed550e56f" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.719873 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.764065 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq79v\" (UniqueName: \"kubernetes.io/projected/48e55475-0575-41e9-9949-d5bdb86ee565-kube-api-access-tq79v\") pod \"48e55475-0575-41e9-9949-d5bdb86ee565\" (UID: \"48e55475-0575-41e9-9949-d5bdb86ee565\") " Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.771053 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e55475-0575-41e9-9949-d5bdb86ee565-kube-api-access-tq79v" (OuterVolumeSpecName: "kube-api-access-tq79v") pod "48e55475-0575-41e9-9949-d5bdb86ee565" (UID: "48e55475-0575-41e9-9949-d5bdb86ee565"). InnerVolumeSpecName "kube-api-access-tq79v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.853330 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48e55475-0575-41e9-9949-d5bdb86ee565","Type":"ContainerDied","Data":"9633ddbf304ca274323481b3201fc205270d10050cf1a627fe1c312839627e59"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.853379 4684 scope.go:117] "RemoveContainer" containerID="b95156a791bf23ae774336a0514e84c390ba31fc6386be32beea115aec8187db" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.853652 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.865989 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq79v\" (UniqueName: \"kubernetes.io/projected/48e55475-0575-41e9-9949-d5bdb86ee565-kube-api-access-tq79v\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.867962 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8422775d-1328-4c3b-ab94-f235f45da903","Type":"ContainerStarted","Data":"fe4898e0f384ce64dd84f36fe3a2b7d0ec6b514b769c9683e1930a0f64736cbb"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.868006 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8422775d-1328-4c3b-ab94-f235f45da903","Type":"ContainerStarted","Data":"cee12fcd9bf176eae676b6e84e67c25b1175501c01645605332c04911fae741b"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.912860 4684 generic.go:334] "Generic (PLEG): container finished" podID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerID="4f91368c55733959997086ae0c0025cedf3bb8e0f40669be9100d0ca465f4899" exitCode=0 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.913145 4684 generic.go:334] "Generic (PLEG): container finished" podID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerID="d9d5064a6bd442ab834cc4d983b89821968c0cd7e07b209928a830c29c998672" exitCode=2 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.913206 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerDied","Data":"4f91368c55733959997086ae0c0025cedf3bb8e0f40669be9100d0ca465f4899"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.913231 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerDied","Data":"d9d5064a6bd442ab834cc4d983b89821968c0cd7e07b209928a830c29c998672"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.929974 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.929959545 podStartE2EDuration="2.929959545s" podCreationTimestamp="2026-01-23 09:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:04.90975319 +0000 UTC m=+1677.533131741" watchObservedRunningTime="2026-01-23 09:35:04.929959545 +0000 UTC m=+1677.553338086" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.931891 4684 generic.go:334] "Generic (PLEG): container finished" podID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerID="5bb3593da0b51fcc3d7297b504c2a0e9d68c353eb732cc188b4c11252d67fc95" exitCode=0 Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.932896 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e","Type":"ContainerDied","Data":"5bb3593da0b51fcc3d7297b504c2a0e9d68c353eb732cc188b4c11252d67fc95"} Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.933445 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:04 crc kubenswrapper[4684]: I0123 09:35:04.952692 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.010750 4684 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.054410 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:35:05 crc kubenswrapper[4684]: E0123 09:35:05.055420 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" containerName="kube-state-metrics" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.055444 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" containerName="kube-state-metrics" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.055814 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" containerName="kube-state-metrics" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.058983 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.062850 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.063968 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.068504 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.068479699 podStartE2EDuration="4.068479699s" podCreationTimestamp="2026-01-23 09:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:04.980807564 +0000 UTC m=+1677.604186105" watchObservedRunningTime="2026-01-23 09:35:05.068479699 +0000 UTC m=+1677.691858240" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.073145 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.073261 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.073347 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbwbk\" (UniqueName: \"kubernetes.io/projected/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-api-access-qbwbk\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.073487 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " 
pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.190138 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.190320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.190422 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.190504 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbwbk\" (UniqueName: \"kubernetes.io/projected/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-api-access-qbwbk\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.194418 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.221628 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.223584 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.278360 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.278331116 podStartE2EDuration="4.278331116s" podCreationTimestamp="2026-01-23 09:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:05.021262634 +0000 UTC m=+1677.644641205" watchObservedRunningTime="2026-01-23 09:35:05.278331116 +0000 UTC m=+1677.901709657" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.285533 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbwbk\" (UniqueName: \"kubernetes.io/projected/2380836b-7770-4b06-9cb2-b61dfda5e96a-kube-api-access-qbwbk\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.335003 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2380836b-7770-4b06-9cb2-b61dfda5e96a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2380836b-7770-4b06-9cb2-b61dfda5e96a\") " pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.382816 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.442221 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.597861 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e55475-0575-41e9-9949-d5bdb86ee565" path="/var/lib/kubelet/pods/48e55475-0575-41e9-9949-d5bdb86ee565/volumes" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.606672 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-logs\") pod \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.606802 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-combined-ca-bundle\") pod \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.606839 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcbr\" (UniqueName: \"kubernetes.io/projected/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-kube-api-access-lfcbr\") pod \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.606873 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-config-data\") pod \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\" (UID: \"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e\") " Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.608152 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-logs" (OuterVolumeSpecName: "logs") pod "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" (UID: "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.614908 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-kube-api-access-lfcbr" (OuterVolumeSpecName: "kube-api-access-lfcbr") pod "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" (UID: "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e"). InnerVolumeSpecName "kube-api-access-lfcbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.658092 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" (UID: "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.683815 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-config-data" (OuterVolumeSpecName: "config-data") pod "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" (UID: "f44ada8d-8ab8-4b47-80ec-750f3bef5d6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.713270 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.713295 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.713306 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcbr\" (UniqueName: \"kubernetes.io/projected/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-kube-api-access-lfcbr\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.713315 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.947000 4684 generic.go:334] "Generic (PLEG): container finished" podID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerID="130a4497d5d83d14210764ce6ccaedb36dde9c3686a5be3a26020ed76249fdd4" exitCode=0 Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.948743 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerDied","Data":"130a4497d5d83d14210764ce6ccaedb36dde9c3686a5be3a26020ed76249fdd4"} Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.953499 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f44ada8d-8ab8-4b47-80ec-750f3bef5d6e","Type":"ContainerDied","Data":"ea676ff2834da13981a3a6d6e719f1b14eb6951814abf394fa6000b3892a3b50"} Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.953667 4684 scope.go:117] "RemoveContainer" containerID="5bb3593da0b51fcc3d7297b504c2a0e9d68c353eb732cc188b4c11252d67fc95" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.953893 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:05 crc kubenswrapper[4684]: I0123 09:35:05.993956 4684 scope.go:117] "RemoveContainer" containerID="a6faadd2f4da809558cce77380dcdbf7864699cebdc1392c195c13b7f2c63441" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.015804 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.030971 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.057823 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:35:06 crc kubenswrapper[4684]: W0123 09:35:06.072819 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2380836b_7770_4b06_9cb2_b61dfda5e96a.slice/crio-072dffcc87031e7af61a5745a56eea18cf40925a79143e78618eb2ad103e24b4 WatchSource:0}: Error finding container 072dffcc87031e7af61a5745a56eea18cf40925a79143e78618eb2ad103e24b4: Status 404 returned error can't find the container with id 072dffcc87031e7af61a5745a56eea18cf40925a79143e78618eb2ad103e24b4 Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.090414 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:06 crc kubenswrapper[4684]: E0123 09:35:06.090823 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-log" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.090834 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-log" Jan 23 09:35:06 crc kubenswrapper[4684]: E0123 09:35:06.090869 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-api" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.090876 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-api" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.091037 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-log" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.091054 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" containerName="nova-api-api" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.091976 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.097678 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.136914 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.221356 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef78dc7-d7fb-4521-8b81-5805708cea53-logs\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.221411 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lh5g\" (UniqueName: \"kubernetes.io/projected/2ef78dc7-d7fb-4521-8b81-5805708cea53-kube-api-access-9lh5g\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.221472 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.221548 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-config-data\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.322873 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef78dc7-d7fb-4521-8b81-5805708cea53-logs\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.322921 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lh5g\" (UniqueName: \"kubernetes.io/projected/2ef78dc7-d7fb-4521-8b81-5805708cea53-kube-api-access-9lh5g\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.322970 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.323019 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-config-data\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.324053 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef78dc7-d7fb-4521-8b81-5805708cea53-logs\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " 
pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.345407 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.347440 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-config-data\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.380226 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lh5g\" (UniqueName: \"kubernetes.io/projected/2ef78dc7-d7fb-4521-8b81-5805708cea53-kube-api-access-9lh5g\") pod \"nova-api-0\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") " pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.460266 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.987599 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2380836b-7770-4b06-9cb2-b61dfda5e96a","Type":"ContainerStarted","Data":"5f07dd860ab7b77595996b4a0d293007efc4ec790c721793049684fd696d2b49"} Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.987980 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2380836b-7770-4b06-9cb2-b61dfda5e96a","Type":"ContainerStarted","Data":"072dffcc87031e7af61a5745a56eea18cf40925a79143e78618eb2ad103e24b4"} Jan 23 09:35:06 crc kubenswrapper[4684]: I0123 09:35:06.988808 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.040751 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.583465521 podStartE2EDuration="3.040729102s" podCreationTimestamp="2026-01-23 09:35:04 +0000 UTC" firstStartedPulling="2026-01-23 09:35:06.077983016 +0000 UTC m=+1678.701361557" lastFinishedPulling="2026-01-23 09:35:06.535246597 +0000 UTC m=+1679.158625138" observedRunningTime="2026-01-23 09:35:07.028645373 +0000 UTC m=+1679.652023914" watchObservedRunningTime="2026-01-23 09:35:07.040729102 +0000 UTC m=+1679.664107643" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.081483 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.166580 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pc29g"] Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.186813 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.244093 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pc29g"] Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.250993 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-catalog-content\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.251291 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-utilities\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.251614 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8kq\" (UniqueName: \"kubernetes.io/projected/04a6342a-bcc9-47fc-afdf-a36cd21a721a-kube-api-access-kr8kq\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.321960 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.354352 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-utilities\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.354634 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8kq\" (UniqueName: \"kubernetes.io/projected/04a6342a-bcc9-47fc-afdf-a36cd21a721a-kube-api-access-kr8kq\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.354711 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-catalog-content\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.355496 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-utilities\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.357048 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-catalog-content\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " 
pod="openshift-marketplace/redhat-marketplace-pc29g"
Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.391883 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8kq\" (UniqueName: \"kubernetes.io/projected/04a6342a-bcc9-47fc-afdf-a36cd21a721a-kube-api-access-kr8kq\") pod \"redhat-marketplace-pc29g\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " pod="openshift-marketplace/redhat-marketplace-pc29g"
Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.519787 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pc29g"
Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.617360 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44ada8d-8ab8-4b47-80ec-750f3bef5d6e" path="/var/lib/kubelet/pods/f44ada8d-8ab8-4b47-80ec-750f3bef5d6e/volumes"
Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.999244 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ef78dc7-d7fb-4521-8b81-5805708cea53","Type":"ContainerStarted","Data":"891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d"}
Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.999485 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ef78dc7-d7fb-4521-8b81-5805708cea53","Type":"ContainerStarted","Data":"016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc"}
Jan 23 09:35:07 crc kubenswrapper[4684]: I0123 09:35:07.999495 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ef78dc7-d7fb-4521-8b81-5805708cea53","Type":"ContainerStarted","Data":"5478c64fec8495b786bb449704d19c33418c6fc046ad659a32ba7ba33a655622"}
Jan 23 09:35:08 crc kubenswrapper[4684]: I0123 09:35:08.034551 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.034530356 podStartE2EDuration="2.034530356s" podCreationTimestamp="2026-01-23 09:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:08.029759078 +0000 UTC m=+1680.653137629" watchObservedRunningTime="2026-01-23 09:35:08.034530356 +0000 UTC m=+1680.657908887"
Jan 23 09:35:08 crc kubenswrapper[4684]: I0123 09:35:08.157621 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pc29g"]
Jan 23 09:35:08 crc kubenswrapper[4684]: I0123 09:35:08.262065 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 09:35:08 crc kubenswrapper[4684]: I0123 09:35:08.262334 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.014879 4684 generic.go:334] "Generic (PLEG): container finished" podID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerID="21772dd7c3987e334a247af035368b487c7a80039dd4e546f1fa783ed550e56f" exitCode=0
Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.014943 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerDied","Data":"21772dd7c3987e334a247af035368b487c7a80039dd4e546f1fa783ed550e56f"}
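The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" entries come from the Pod Lifecycle Event Generator, which periodically relists containers from the runtime (CRI-O here) and turns state transitions into ContainerStarted/ContainerDied events that the sync loop consumes. A toy relist in Go, under the assumption of a simple old-vs-new state diff (the real PLEG tracks richer per-sandbox state):

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // relist diffs two container-state snapshots and emits PLEG-style events.
    func relist(prev, curr map[string]state) []string {
        var events []string
        for id, s := range curr {
            switch {
            case s == running && prev[id] != running:
                events = append(events, "ContainerStarted "+id)
            case s == exited && prev[id] == running:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        // Truncated container IDs from the entries above, for illustration only.
        prev := map[string]state{"21772dd7": running}
        curr := map[string]state{"21772dd7": exited, "016ee313": running}
        for _, e := range relist(prev, curr) {
            fmt.Println(e) // e.g. "ContainerDied 21772dd7"
        }
    }

Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.018754 4684 generic.go:334] "Generic (PLEG): container finished" podID="04a6342a-bcc9-47fc-afdf-a36cd21a721a"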
containerID="fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427" exitCode=0 Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.018845 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerDied","Data":"fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427"} Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.018888 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerStarted","Data":"77727fc7c9af8d7c4d9bd64d802bcf7a84038f8c0f67f3b7b099f16f110bb950"} Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.450677 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.611646 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-config-data\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.611903 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmdks\" (UniqueName: \"kubernetes.io/projected/af35288f-b2d9-4281-a7c7-2fbc7d21596f-kube-api-access-rmdks\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.612066 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-sg-core-conf-yaml\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.612266 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-scripts\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.612432 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-run-httpd\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.612626 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-combined-ca-bundle\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.612754 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-log-httpd\") pod \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\" (UID: \"af35288f-b2d9-4281-a7c7-2fbc7d21596f\") " Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.614070 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.620227 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.643184 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-scripts" (OuterVolumeSpecName: "scripts") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.735847 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.735890 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.735901 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af35288f-b2d9-4281-a7c7-2fbc7d21596f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.742084 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af35288f-b2d9-4281-a7c7-2fbc7d21596f-kube-api-access-rmdks" (OuterVolumeSpecName: "kube-api-access-rmdks") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "kube-api-access-rmdks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.837880 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmdks\" (UniqueName: \"kubernetes.io/projected/af35288f-b2d9-4281-a7c7-2fbc7d21596f-kube-api-access-rmdks\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.854876 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.917007 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.939634 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.939668 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:09 crc kubenswrapper[4684]: I0123 09:35:09.947927 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-config-data" (OuterVolumeSpecName: "config-data") pod "af35288f-b2d9-4281-a7c7-2fbc7d21596f" (UID: "af35288f-b2d9-4281-a7c7-2fbc7d21596f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.040982 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af35288f-b2d9-4281-a7c7-2fbc7d21596f","Type":"ContainerDied","Data":"0d896e95c752003f5b5e0574e4e4eb577072fc289003fb1c36465497f82f7ba3"}
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.041047 4684 scope.go:117] "RemoveContainer" containerID="4f91368c55733959997086ae0c0025cedf3bb8e0f40669be9100d0ca465f4899"
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.041223 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.044519 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af35288f-b2d9-4281-a7c7-2fbc7d21596f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.051200 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerStarted","Data":"578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17"}
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.073722 4684 scope.go:117] "RemoveContainer" containerID="d9d5064a6bd442ab834cc4d983b89821968c0cd7e07b209928a830c29c998672"
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.123653 4684 scope.go:117] "RemoveContainer" containerID="21772dd7c3987e334a247af035368b487c7a80039dd4e546f1fa783ed550e56f"
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.124407 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.127550 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vgbm"
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.139998 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.158548 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:35:10 crc kubenswrapper[4684]: E0123 09:35:10.158914 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="proxy-httpd"
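The DELETE/REMOVE/ADD sequence above is ceilometer-0 being recreated under a new UID, and the cpu_manager/memory_manager entries that follow drop per-container resource state recorded for the old UID (af35288f-...). A sketch of that cleanup with an assumed checkpoint layout; the kubelet's actual state files and types differ:

    package main

    import "fmt"

    // key identifies an assignment in an assumed checkpoint map.
    type key struct{ podUID, container string }

    // removeStaleState drops assignments for pods that no longer exist,
    // echoing "RemoveStaleState: removing container" and
    // "Deleted CPUSet assignment" in the log.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments { // deleting during range is safe in Go
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        stale := "af35288f-b2d9-4281-a7c7-2fbc7d21596f" // old ceilometer-0 UID, from the log
        assignments := map[key]string{
            {stale, "proxy-httpd"}: "shared pool",
            {stale, "sg-core"}:     "shared pool",
        }
        removeStaleState(assignments, map[string]bool{ /* old UID no longer active */ })
    }

Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.158931 4684 state_mem.go:107] "Deleted CPUSet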
assignment" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="proxy-httpd" Jan 23 09:35:10 crc kubenswrapper[4684]: E0123 09:35:10.158945 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-notification-agent" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.158952 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-notification-agent" Jan 23 09:35:10 crc kubenswrapper[4684]: E0123 09:35:10.158964 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="sg-core" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.158970 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="sg-core" Jan 23 09:35:10 crc kubenswrapper[4684]: E0123 09:35:10.158991 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-central-agent" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.158997 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-central-agent" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.159163 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="sg-core" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.159187 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="proxy-httpd" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.159205 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-notification-agent" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.159212 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" containerName="ceilometer-central-agent" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.161206 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.167017 4684 scope.go:117] "RemoveContainer" containerID="130a4497d5d83d14210764ce6ccaedb36dde9c3686a5be3a26020ed76249fdd4" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.178884 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.179140 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.179277 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.199802 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.237157 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256307 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256680 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-run-httpd\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256849 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256876 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6dp\" (UniqueName: \"kubernetes.io/projected/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-kube-api-access-8d6dp\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256897 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256930 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-config-data\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.256974 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-scripts\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.257085 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-log-httpd\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.359266 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-run-httpd\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360266 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360295 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6dp\" (UniqueName: \"kubernetes.io/projected/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-kube-api-access-8d6dp\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360354 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-config-data\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360399 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-scripts\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360441 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-log-httpd\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360516 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-run-httpd\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.360782 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-log-httpd\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.364844 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.368819 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.369299 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.382099 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6dp\" (UniqueName: \"kubernetes.io/projected/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-kube-api-access-8d6dp\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.390678 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-config-data\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.393738 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-scripts\") pod \"ceilometer-0\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") " pod="openstack/ceilometer-0" Jan 23 09:35:10 crc kubenswrapper[4684]: I0123 09:35:10.483043 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:35:11 crc kubenswrapper[4684]: I0123 09:35:11.008655 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:11 crc kubenswrapper[4684]: I0123 09:35:11.062733 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerStarted","Data":"50e973f85719e53611e988ca52161566a7340c7b0fac979e046f8422f25515b2"} Jan 23 09:35:11 crc kubenswrapper[4684]: I0123 09:35:11.067375 4684 generic.go:334] "Generic (PLEG): container finished" podID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerID="578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17" exitCode=0 Jan 23 09:35:11 crc kubenswrapper[4684]: I0123 09:35:11.067466 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerDied","Data":"578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17"} Jan 23 09:35:11 crc kubenswrapper[4684]: I0123 09:35:11.603956 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af35288f-b2d9-4281-a7c7-2fbc7d21596f" path="/var/lib/kubelet/pods/af35288f-b2d9-4281-a7c7-2fbc7d21596f/volumes" Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.079786 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerStarted","Data":"654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc"} Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.081677 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerStarted","Data":"41b5a3ffede749f2c21f7b87787775db52a32c8a7086d37064fb28eed7692788"} Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.111474 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pc29g" podStartSLOduration=2.66925461 podStartE2EDuration="5.111451171s" podCreationTimestamp="2026-01-23 09:35:07 +0000 UTC" firstStartedPulling="2026-01-23 09:35:09.020832062 +0000 UTC m=+1681.644210603" lastFinishedPulling="2026-01-23 09:35:11.463028623 +0000 UTC m=+1684.086407164" observedRunningTime="2026-01-23 09:35:12.104525781 +0000 UTC m=+1684.727904322" watchObservedRunningTime="2026-01-23 09:35:12.111451171 +0000 UTC m=+1684.734829712" Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.249271 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.320740 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.352175 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.546594 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vgbm"] Jan 23 09:35:12 crc kubenswrapper[4684]: I0123 09:35:12.551677 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5vgbm" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="registry-server" 
containerID="cri-o://c5b6437ad8b79cea69578ac45d820eac798e9bc6ab79bd8a5829a23967bccde0" gracePeriod=2 Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.137752 4684 generic.go:334] "Generic (PLEG): container finished" podID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerID="c5b6437ad8b79cea69578ac45d820eac798e9bc6ab79bd8a5829a23967bccde0" exitCode=0 Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.137892 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerDied","Data":"c5b6437ad8b79cea69578ac45d820eac798e9bc6ab79bd8a5829a23967bccde0"} Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.137943 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vgbm" event={"ID":"f33774cf-bd34-4d96-bef3-dbf5751ba774","Type":"ContainerDied","Data":"0b0029ceb891eba792f86e54adad977aa8b65698d3beab77e48a2cc419e3c883"} Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.137958 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b0029ceb891eba792f86e54adad977aa8b65698d3beab77e48a2cc419e3c883" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.153328 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerStarted","Data":"10cd04d16e3668f96b99f4ca9f20c43e3c18d400c5549debc35d8f5edade414b"} Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.178326 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vgbm" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.261387 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.261433 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.332612 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-catalog-content\") pod \"f33774cf-bd34-4d96-bef3-dbf5751ba774\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.332735 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-utilities\") pod \"f33774cf-bd34-4d96-bef3-dbf5751ba774\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.332781 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4r6q\" (UniqueName: \"kubernetes.io/projected/f33774cf-bd34-4d96-bef3-dbf5751ba774-kube-api-access-m4r6q\") pod \"f33774cf-bd34-4d96-bef3-dbf5751ba774\" (UID: \"f33774cf-bd34-4d96-bef3-dbf5751ba774\") " Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.335025 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-utilities" (OuterVolumeSpecName: "utilities") pod "f33774cf-bd34-4d96-bef3-dbf5751ba774" (UID: "f33774cf-bd34-4d96-bef3-dbf5751ba774"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.349369 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33774cf-bd34-4d96-bef3-dbf5751ba774-kube-api-access-m4r6q" (OuterVolumeSpecName: "kube-api-access-m4r6q") pod "f33774cf-bd34-4d96-bef3-dbf5751ba774" (UID: "f33774cf-bd34-4d96-bef3-dbf5751ba774"). InnerVolumeSpecName "kube-api-access-m4r6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.397248 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f33774cf-bd34-4d96-bef3-dbf5751ba774" (UID: "f33774cf-bd34-4d96-bef3-dbf5751ba774"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.440276 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.440657 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33774cf-bd34-4d96-bef3-dbf5751ba774-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.440670 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4r6q\" (UniqueName: \"kubernetes.io/projected/f33774cf-bd34-4d96-bef3-dbf5751ba774-kube-api-access-m4r6q\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.728725 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.729023 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:35:13 crc kubenswrapper[4684]: I0123 09:35:13.755132 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 09:35:14 crc kubenswrapper[4684]: I0123 09:35:14.175131 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5vgbm"
Jan 23 09:35:14 crc kubenswrapper[4684]: I0123 09:35:14.175773 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerStarted","Data":"4e3f846e126c284a16175d190bf3e5718f7ffe453648e6e9aa20190121025557"}
Jan 23 09:35:14 crc kubenswrapper[4684]: I0123 09:35:14.217405 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vgbm"]
Jan 23 09:35:14 crc kubenswrapper[4684]: I0123 09:35:14.253441 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5vgbm"]
Jan 23 09:35:14 crc kubenswrapper[4684]: I0123 09:35:14.275934 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:35:14 crc kubenswrapper[4684]: I0123 09:35:14.275970 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 09:35:15 crc kubenswrapper[4684]: I0123 09:35:15.419361 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 23 09:35:15 crc kubenswrapper[4684]: I0123 09:35:15.597033 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" path="/var/lib/kubelet/pods/f33774cf-bd34-4d96-bef3-dbf5751ba774/volumes"
Jan 23 09:35:16 crc kubenswrapper[4684]: I0123 09:35:16.461163 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 09:35:16 crc kubenswrapper[4684]: I0123 09:35:16.461687 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.221014 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerStarted","Data":"e19dfabe20cbb867605cf4967faaf2b66c523c588672f5029083250a118a7164"}
Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.221238 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.256289 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.995687793 podStartE2EDuration="7.256261594s" podCreationTimestamp="2026-01-23 09:35:10 +0000 UTC" firstStartedPulling="2026-01-23 09:35:11.016119262 +0000 UTC m=+1683.639497803" lastFinishedPulling="2026-01-23 09:35:15.276693063 +0000 UTC m=+1687.900071604" observedRunningTime="2026-01-23 09:35:17.24967706 +0000 UTC m=+1689.873055611" watchObservedRunningTime="2026-01-23 09:35:17.256261594 +0000 UTC m=+1689.879640145"
Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.519945 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pc29g"
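The "Probe failed ... Client.Timeout exceeded while awaiting headers" entries are startup probes whose HTTP GET did not return response headers within the probe timeout. The sketch below reproduces that failure mode with a plain net/http client; the URL is the one logged above but is only reachable inside this cluster, the 1-second timeout is an assumption (the pod spec's probe timeout is not shown in the log), and the kubelet's own prober additionally skips TLS verification, which this sketch omits.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs a single HTTP startup-probe attempt.
    func probe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            // Slow or unresponsive endpoints surface here as
            // "Client.Timeout exceeded while awaiting headers".
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("https://10.217.0.183:8775/", 1*time.Second); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }

Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.521012 4684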
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.550883 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.551279 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:17 crc kubenswrapper[4684]: I0123 09:35:17.696181 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:18 crc kubenswrapper[4684]: I0123 09:35:18.281762 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:18 crc kubenswrapper[4684]: I0123 09:35:18.349824 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pc29g"] Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.249113 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pc29g" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="registry-server" containerID="cri-o://654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc" gracePeriod=2 Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.800276 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pc29g" Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.889725 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-catalog-content\") pod \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.889794 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-utilities\") pod \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.889888 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8kq\" (UniqueName: \"kubernetes.io/projected/04a6342a-bcc9-47fc-afdf-a36cd21a721a-kube-api-access-kr8kq\") pod \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\" (UID: \"04a6342a-bcc9-47fc-afdf-a36cd21a721a\") " Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.890638 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-utilities" (OuterVolumeSpecName: "utilities") pod "04a6342a-bcc9-47fc-afdf-a36cd21a721a" (UID: "04a6342a-bcc9-47fc-afdf-a36cd21a721a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.911502 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a6342a-bcc9-47fc-afdf-a36cd21a721a-kube-api-access-kr8kq" (OuterVolumeSpecName: "kube-api-access-kr8kq") pod "04a6342a-bcc9-47fc-afdf-a36cd21a721a" (UID: "04a6342a-bcc9-47fc-afdf-a36cd21a721a"). InnerVolumeSpecName "kube-api-access-kr8kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.918904 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04a6342a-bcc9-47fc-afdf-a36cd21a721a" (UID: "04a6342a-bcc9-47fc-afdf-a36cd21a721a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.991848 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.991884 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6342a-bcc9-47fc-afdf-a36cd21a721a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:20 crc kubenswrapper[4684]: I0123 09:35:20.991897 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8kq\" (UniqueName: \"kubernetes.io/projected/04a6342a-bcc9-47fc-afdf-a36cd21a721a-kube-api-access-kr8kq\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.262053 4684 generic.go:334] "Generic (PLEG): container finished" podID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerID="654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc" exitCode=0 Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.262098 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerDied","Data":"654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc"} Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.262127 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pc29g" event={"ID":"04a6342a-bcc9-47fc-afdf-a36cd21a721a","Type":"ContainerDied","Data":"77727fc7c9af8d7c4d9bd64d802bcf7a84038f8c0f67f3b7b099f16f110bb950"} Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.262143 4684 scope.go:117] "RemoveContainer" containerID="654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc" Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.262260 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.305569 4684 scope.go:117] "RemoveContainer" containerID="578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.312453 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pc29g"]
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.323290 4684 scope.go:117] "RemoveContainer" containerID="fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.330302 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pc29g"]
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.377823 4684 scope.go:117] "RemoveContainer" containerID="654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc"
Jan 23 09:35:21 crc kubenswrapper[4684]: E0123 09:35:21.378231 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc\": container with ID starting with 654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc not found: ID does not exist" containerID="654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.378266 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc"} err="failed to get container status \"654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc\": rpc error: code = NotFound desc = could not find container \"654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc\": container with ID starting with 654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc not found: ID does not exist"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.378290 4684 scope.go:117] "RemoveContainer" containerID="578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17"
Jan 23 09:35:21 crc kubenswrapper[4684]: E0123 09:35:21.378518 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17\": container with ID starting with 578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17 not found: ID does not exist" containerID="578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.378546 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17"} err="failed to get container status \"578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17\": rpc error: code = NotFound desc = could not find container \"578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17\": container with ID starting with 578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17 not found: ID does not exist"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.378563 4684 scope.go:117] "RemoveContainer" containerID="fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427"
Jan 23 09:35:21 crc kubenswrapper[4684]: E0123 09:35:21.378958 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427\": container with ID starting with fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427 not found: ID does not exist" containerID="fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.378981 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427"} err="failed to get container status \"fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427\": rpc error: code = NotFound desc = could not find container \"fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427\": container with ID starting with fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427 not found: ID does not exist"
Jan 23 09:35:21 crc kubenswrapper[4684]: W0123 09:35:21.581404 4684 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-conmon-654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-conmon-654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc.scope: no such file or directory
Jan 23 09:35:21 crc kubenswrapper[4684]: W0123 09:35:21.581494 4684 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-654792ec2d79315f5050442ebd0120b2bf3964b60e11e51690ef09d3646741bc.scope: no such file or directory
Jan 23 09:35:21 crc kubenswrapper[4684]: W0123 09:35:21.590536 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427.scope WatchSource:0}: Error finding container fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427: Status 404 returned error can't find the container with id fd1e2ba4260e07893ea15d3cfac352e2fd3204806f2023961a3c95bacaf2d427
Jan 23 09:35:21 crc kubenswrapper[4684]: W0123 09:35:21.595628 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17.scope WatchSource:0}: Error finding container 578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17: Status 404 returned error can't find the container with id 578cc89a47e3bb7f02c3ff49f6f9032c075211cf48e08cd50b403a89b0141d17
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.612556 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" path="/var/lib/kubelet/pods/04a6342a-bcc9-47fc-afdf-a36cd21a721a/volumes"
Jan 23 09:35:21 crc kubenswrapper[4684]: E0123 09:35:21.831045 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f5eb1f_ec36_426e_a675_b23ffe20e282.slice/crio-9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f5eb1f_ec36_426e_a675_b23ffe20e282.slice/crio-conmon-9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice/crio-77727fc7c9af8d7c4d9bd64d802bcf7a84038f8c0f67f3b7b099f16f110bb950\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6342a_bcc9_47fc_afdf_a36cd21a721a.slice\": RecentStats: unable to find data in memory cache]"
Jan 23 09:35:21 crc kubenswrapper[4684]: I0123 09:35:21.987003 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.131473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkrdm\" (UniqueName: \"kubernetes.io/projected/50f5eb1f-ec36-426e-a675-b23ffe20e282-kube-api-access-jkrdm\") pod \"50f5eb1f-ec36-426e-a675-b23ffe20e282\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") "
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.131597 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-combined-ca-bundle\") pod \"50f5eb1f-ec36-426e-a675-b23ffe20e282\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") "
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.131755 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-config-data\") pod \"50f5eb1f-ec36-426e-a675-b23ffe20e282\" (UID: \"50f5eb1f-ec36-426e-a675-b23ffe20e282\") "
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.164997 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f5eb1f-ec36-426e-a675-b23ffe20e282-kube-api-access-jkrdm" (OuterVolumeSpecName: "kube-api-access-jkrdm") pod "50f5eb1f-ec36-426e-a675-b23ffe20e282" (UID: "50f5eb1f-ec36-426e-a675-b23ffe20e282"). InnerVolumeSpecName "kube-api-access-jkrdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.200872 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50f5eb1f-ec36-426e-a675-b23ffe20e282" (UID: "50f5eb1f-ec36-426e-a675-b23ffe20e282"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.212338 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-config-data" (OuterVolumeSpecName: "config-data") pod "50f5eb1f-ec36-426e-a675-b23ffe20e282" (UID: "50f5eb1f-ec36-426e-a675-b23ffe20e282"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.238694 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.238748 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkrdm\" (UniqueName: \"kubernetes.io/projected/50f5eb1f-ec36-426e-a675-b23ffe20e282-kube-api-access-jkrdm\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.238765 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f5eb1f-ec36-426e-a675-b23ffe20e282-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.275342 4684 generic.go:334] "Generic (PLEG): container finished" podID="50f5eb1f-ec36-426e-a675-b23ffe20e282" containerID="9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443" exitCode=137
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.275469 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50f5eb1f-ec36-426e-a675-b23ffe20e282","Type":"ContainerDied","Data":"9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443"}
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.275559 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50f5eb1f-ec36-426e-a675-b23ffe20e282","Type":"ContainerDied","Data":"cdcc6a5954d3e57d45d19b766d51418cb3855a20777356a834997754c0d2d8d0"}
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.275585 4684 scope.go:117] "RemoveContainer" containerID="9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.276657 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.307369 4684 scope.go:117] "RemoveContainer" containerID="9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.308116 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443\": container with ID starting with 9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443 not found: ID does not exist" containerID="9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.308231 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443"} err="failed to get container status \"9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443\": rpc error: code = NotFound desc = could not find container \"9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443\": container with ID starting with 9a4eda9c24c97a5f590074aa487a1f3aca20a1fef92dd92b60246d34d1f5c443 not found: ID does not exist"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.334844 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.361351 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386163 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386597 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="extract-content"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386616 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="extract-content"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386648 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f5eb1f-ec36-426e-a675-b23ffe20e282" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386657 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f5eb1f-ec36-426e-a675-b23ffe20e282" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386671 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="extract-content"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386680 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="extract-content"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386703 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="extract-utilities"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386710 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="extract-utilities"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386752 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="extract-utilities"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386761 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="extract-utilities"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386773 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="registry-server"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386782 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="registry-server"
Jan 23 09:35:22 crc kubenswrapper[4684]: E0123 09:35:22.386797 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="registry-server"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386805 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="registry-server"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.386993 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33774cf-bd34-4d96-bef3-dbf5751ba774" containerName="registry-server"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.387015 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a6342a-bcc9-47fc-afdf-a36cd21a721a" containerName="registry-server"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.387028 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f5eb1f-ec36-426e-a675-b23ffe20e282" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.387703 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.394342 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.394577 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.394778 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.442902 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.546009 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.546095 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.546134 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.546168 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjck4\" (UniqueName: \"kubernetes.io/projected/c03f1660-c3bd-4803-b1fd-c07c36966484-kube-api-access-hjck4\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.546189 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.648315 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.648420 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.648453 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.648484 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjck4\" (UniqueName: \"kubernetes.io/projected/c03f1660-c3bd-4803-b1fd-c07c36966484-kube-api-access-hjck4\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.648503 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.654572 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.654576 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.671964 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.679935 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03f1660-c3bd-4803-b1fd-c07c36966484-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.698929 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjck4\" (UniqueName: \"kubernetes.io/projected/c03f1660-c3bd-4803-b1fd-c07c36966484-kube-api-access-hjck4\") pod \"nova-cell1-novncproxy-0\" (UID: \"c03f1660-c3bd-4803-b1fd-c07c36966484\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:22 crc kubenswrapper[4684]: I0123 09:35:22.756964 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.269552 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.271351 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.272377 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.286247 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.288891 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c03f1660-c3bd-4803-b1fd-c07c36966484","Type":"ContainerStarted","Data":"1eb91efd63c00e3b12e43c62b9048ec3d6f63b795540343f43ece77de8c09092"}
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.303707 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 09:35:23 crc kubenswrapper[4684]: I0123 09:35:23.594789 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f5eb1f-ec36-426e-a675-b23ffe20e282" path="/var/lib/kubelet/pods/50f5eb1f-ec36-426e-a675-b23ffe20e282/volumes"
Jan 23 09:35:24 crc kubenswrapper[4684]: I0123 09:35:24.299990 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c03f1660-c3bd-4803-b1fd-c07c36966484","Type":"ContainerStarted","Data":"7b153662d72a435989a54872c7ee3551a765873c3c06ffd10d2e84cd7622fb9b"}
Jan 23 09:35:24 crc kubenswrapper[4684]: I0123 09:35:24.324421 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.32439848 podStartE2EDuration="2.32439848s" podCreationTimestamp="2026-01-23 09:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:24.313827884 +0000 UTC m=+1696.937206425" watchObservedRunningTime="2026-01-23 09:35:24.32439848 +0000 UTC m=+1696.947777021"
Jan 23 09:35:26 crc kubenswrapper[4684]: I0123 09:35:26.463648 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 09:35:26 crc kubenswrapper[4684]: I0123 09:35:26.465346 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 09:35:26 crc kubenswrapper[4684]: I0123 09:35:26.467795 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 09:35:26 crc kubenswrapper[4684]: I0123 09:35:26.468989 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.324637 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.329753 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.517765 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f69c5c76f-8qdgs"]
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.519270 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.531483 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f69c5c76f-8qdgs"]
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.641998 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6f78\" (UniqueName: \"kubernetes.io/projected/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-kube-api-access-h6f78\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.642105 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-dns-svc\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.642143 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.642179 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.642198 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-config\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.753627 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.753674 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-config\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.753806 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6f78\" (UniqueName: \"kubernetes.io/projected/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-kube-api-access-h6f78\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.753899 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-dns-svc\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.753945 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.754618 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-config\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.754645 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.755180 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.755680 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-dns-svc\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.757952 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.777333 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6f78\" (UniqueName: \"kubernetes.io/projected/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-kube-api-access-h6f78\") pod \"dnsmasq-dns-6f69c5c76f-8qdgs\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:27 crc kubenswrapper[4684]: I0123 09:35:27.852392 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:28 crc kubenswrapper[4684]: I0123 09:35:28.341313 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f69c5c76f-8qdgs"]
Jan 23 09:35:28 crc kubenswrapper[4684]: W0123 09:35:28.360527 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9a95a1_59e2_4ea2_96f4_d95ef3bdcebb.slice/crio-9df5ef70e1883ae23b65c7857b1416e74b1856214aac349d319b540c009a1841 WatchSource:0}: Error finding container 9df5ef70e1883ae23b65c7857b1416e74b1856214aac349d319b540c009a1841: Status 404 returned error can't find the container with id 9df5ef70e1883ae23b65c7857b1416e74b1856214aac349d319b540c009a1841
Jan 23 09:35:29 crc kubenswrapper[4684]: I0123 09:35:29.348142 4684 generic.go:334] "Generic (PLEG): container finished" podID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerID="cc64fedec05b87847f9e240fe2a006be46e0858764bfa7da2f7c6565a14e554b" exitCode=0
Jan 23 09:35:29 crc kubenswrapper[4684]: I0123 09:35:29.349911 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" event={"ID":"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb","Type":"ContainerDied","Data":"cc64fedec05b87847f9e240fe2a006be46e0858764bfa7da2f7c6565a14e554b"}
Jan 23 09:35:29 crc kubenswrapper[4684]: I0123 09:35:29.349950 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" event={"ID":"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb","Type":"ContainerStarted","Data":"9df5ef70e1883ae23b65c7857b1416e74b1856214aac349d319b540c009a1841"}
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.394181 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" event={"ID":"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb","Type":"ContainerStarted","Data":"57629ad7ae1d089781543c6965d9186b66d08a222dce7279b30a4d2098dd5f7e"}
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.396417 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs"
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.450573 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.451038 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-log" containerID="cri-o://016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc" gracePeriod=30
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.451280 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-api" containerID="cri-o://891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d" gracePeriod=30
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.477296 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" podStartSLOduration=3.477270526 podStartE2EDuration="3.477270526s" podCreationTimestamp="2026-01-23 09:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:30.429143145 +0000 UTC m=+1703.052521706" watchObservedRunningTime="2026-01-23 09:35:30.477270526 +0000 UTC m=+1703.100649067"
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.780015 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.780742 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-central-agent" containerID="cri-o://41b5a3ffede749f2c21f7b87787775db52a32c8a7086d37064fb28eed7692788" gracePeriod=30
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.780894 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="proxy-httpd" containerID="cri-o://e19dfabe20cbb867605cf4967faaf2b66c523c588672f5029083250a118a7164" gracePeriod=30
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.780939 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="sg-core" containerID="cri-o://4e3f846e126c284a16175d190bf3e5718f7ffe453648e6e9aa20190121025557" gracePeriod=30
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.781065 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-notification-agent" containerID="cri-o://10cd04d16e3668f96b99f4ca9f20c43e3c18d400c5549debc35d8f5edade414b" gracePeriod=30
Jan 23 09:35:30 crc kubenswrapper[4684]: I0123 09:35:30.786730 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.187:3000/\": read tcp 10.217.0.2:53812->10.217.0.187:3000: read: connection reset by peer"
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.405634 4684 generic.go:334] "Generic (PLEG): container finished" podID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerID="016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc" exitCode=143
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.405729 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ef78dc7-d7fb-4521-8b81-5805708cea53","Type":"ContainerDied","Data":"016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc"}
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.408606 4684 generic.go:334] "Generic (PLEG): container finished" podID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerID="e19dfabe20cbb867605cf4967faaf2b66c523c588672f5029083250a118a7164" exitCode=0
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.408630 4684 generic.go:334] "Generic (PLEG): container finished" podID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerID="4e3f846e126c284a16175d190bf3e5718f7ffe453648e6e9aa20190121025557" exitCode=2
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.408642 4684 generic.go:334] "Generic (PLEG): container finished" podID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerID="41b5a3ffede749f2c21f7b87787775db52a32c8a7086d37064fb28eed7692788" exitCode=0
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.408942 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerDied","Data":"e19dfabe20cbb867605cf4967faaf2b66c523c588672f5029083250a118a7164"}
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.408994 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerDied","Data":"4e3f846e126c284a16175d190bf3e5718f7ffe453648e6e9aa20190121025557"}
Jan 23 09:35:31 crc kubenswrapper[4684]: I0123 09:35:31.409009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerDied","Data":"41b5a3ffede749f2c21f7b87787775db52a32c8a7086d37064fb28eed7692788"}
Jan 23 09:35:32 crc kubenswrapper[4684]: I0123 09:35:32.757941 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:32 crc kubenswrapper[4684]: I0123 09:35:32.781887 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.442361 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.721616 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6d7"]
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.724934 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.730749 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.730977 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.738514 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6d7"]
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.904927 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-scripts\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.905408 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dh9\" (UniqueName: \"kubernetes.io/projected/eb9b804b-5b0a-479a-8834-10c4adb4ad14-kube-api-access-p9dh9\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.905471 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:33 crc kubenswrapper[4684]: I0123 09:35:33.905527 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-config-data\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.007357 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-scripts\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.007509 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dh9\" (UniqueName: \"kubernetes.io/projected/eb9b804b-5b0a-479a-8834-10c4adb4ad14-kube-api-access-p9dh9\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.007564 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.007617 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-config-data\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.034638 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dh9\" (UniqueName: \"kubernetes.io/projected/eb9b804b-5b0a-479a-8834-10c4adb4ad14-kube-api-access-p9dh9\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.036506 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.037148 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-scripts\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.039835 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-config-data\") pod \"nova-cell1-cell-mapping-6t6d7\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.054409 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6d7"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.251462 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.414060 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef78dc7-d7fb-4521-8b81-5805708cea53-logs\") pod \"2ef78dc7-d7fb-4521-8b81-5805708cea53\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.414170 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-config-data\") pod \"2ef78dc7-d7fb-4521-8b81-5805708cea53\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.414529 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lh5g\" (UniqueName: \"kubernetes.io/projected/2ef78dc7-d7fb-4521-8b81-5805708cea53-kube-api-access-9lh5g\") pod \"2ef78dc7-d7fb-4521-8b81-5805708cea53\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.414576 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-combined-ca-bundle\") pod \"2ef78dc7-d7fb-4521-8b81-5805708cea53\" (UID: \"2ef78dc7-d7fb-4521-8b81-5805708cea53\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.415749 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef78dc7-d7fb-4521-8b81-5805708cea53-logs" (OuterVolumeSpecName: "logs") pod "2ef78dc7-d7fb-4521-8b81-5805708cea53" (UID: "2ef78dc7-d7fb-4521-8b81-5805708cea53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.422271 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef78dc7-d7fb-4521-8b81-5805708cea53-kube-api-access-9lh5g" (OuterVolumeSpecName: "kube-api-access-9lh5g") pod "2ef78dc7-d7fb-4521-8b81-5805708cea53" (UID: "2ef78dc7-d7fb-4521-8b81-5805708cea53"). InnerVolumeSpecName "kube-api-access-9lh5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.442107 4684 generic.go:334] "Generic (PLEG): container finished" podID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerID="891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d" exitCode=0
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.442430 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.446065 4684 generic.go:334] "Generic (PLEG): container finished" podID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerID="10cd04d16e3668f96b99f4ca9f20c43e3c18d400c5549debc35d8f5edade414b" exitCode=0
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.442530 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ef78dc7-d7fb-4521-8b81-5805708cea53","Type":"ContainerDied","Data":"891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d"}
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.446513 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ef78dc7-d7fb-4521-8b81-5805708cea53","Type":"ContainerDied","Data":"5478c64fec8495b786bb449704d19c33418c6fc046ad659a32ba7ba33a655622"}
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.446606 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerDied","Data":"10cd04d16e3668f96b99f4ca9f20c43e3c18d400c5549debc35d8f5edade414b"}
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.447576 4684 scope.go:117] "RemoveContainer" containerID="891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.503244 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ef78dc7-d7fb-4521-8b81-5805708cea53" (UID: "2ef78dc7-d7fb-4521-8b81-5805708cea53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.524044 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-config-data" (OuterVolumeSpecName: "config-data") pod "2ef78dc7-d7fb-4521-8b81-5805708cea53" (UID: "2ef78dc7-d7fb-4521-8b81-5805708cea53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.524196 4684 scope.go:117] "RemoveContainer" containerID="016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.526084 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.526488 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lh5g\" (UniqueName: \"kubernetes.io/projected/2ef78dc7-d7fb-4521-8b81-5805708cea53-kube-api-access-9lh5g\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.526575 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef78dc7-d7fb-4521-8b81-5805708cea53-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.526658 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef78dc7-d7fb-4521-8b81-5805708cea53-logs\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.571965 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.572452 4684 scope.go:117] "RemoveContainer" containerID="891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d"
Jan 23 09:35:34 crc kubenswrapper[4684]: E0123 09:35:34.576934 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d\": container with ID starting with 891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d not found: ID does not exist" containerID="891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.576982 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d"} err="failed to get container status \"891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d\": rpc error: code = NotFound desc = could not find container \"891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d\": container with ID starting with 891a6c7457d2493ca65ca286a8472b97cacc3ff9df2401651985461c32d3233d not found: ID does not exist"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.577011 4684 scope.go:117] "RemoveContainer" containerID="016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc"
Jan 23 09:35:34 crc kubenswrapper[4684]: E0123 09:35:34.578560 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc\": container with ID starting with 016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc not found: ID does not exist" containerID="016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.578588 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc"} err="failed to get container status \"016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc\": rpc error: code = NotFound desc = could not find container \"016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc\": container with ID starting with 016ee31397f41b13b8737129790a5bb0d93defa120f4a0af34495ca65355c6fc not found: ID does not exist"
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.648854 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6d7"]
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.729903 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d6dp\" (UniqueName: \"kubernetes.io/projected/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-kube-api-access-8d6dp\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.729996 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-run-httpd\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.730056 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-scripts\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.730192 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-config-data\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.730228 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-log-httpd\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.730265 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-combined-ca-bundle\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.730284 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-sg-core-conf-yaml\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.730306 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-ceilometer-tls-certs\") pod \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\" (UID: \"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b\") "
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.732205 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.734220 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.744148 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-scripts" (OuterVolumeSpecName: "scripts") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.753089 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-kube-api-access-8d6dp" (OuterVolumeSpecName: "kube-api-access-8d6dp") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "kube-api-access-8d6dp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.769476 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.832951 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.832985 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.832995 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.833005 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d6dp\" (UniqueName: \"kubernetes.io/projected/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-kube-api-access-8d6dp\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.833014 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.837750 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.940068 4684 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:34 crc kubenswrapper[4684]: I0123 09:35:34.950390 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.000241 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-config-data" (OuterVolumeSpecName: "config-data") pod "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" (UID: "0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.041660 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.041686 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.076186 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.083914 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102180 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: E0123 09:35:35.102545 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-notification-agent" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102562 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-notification-agent" Jan 23 09:35:35 crc kubenswrapper[4684]: E0123 09:35:35.102580 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-api" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102586 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-api" Jan 23 09:35:35 crc kubenswrapper[4684]: E0123 09:35:35.102601 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-central-agent" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102607 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-central-agent" Jan 23 09:35:35 crc kubenswrapper[4684]: E0123 09:35:35.102619 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="sg-core" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102625 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="sg-core" Jan 23 09:35:35 crc kubenswrapper[4684]: E0123 09:35:35.102639 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="proxy-httpd" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102646 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="proxy-httpd" Jan 23 09:35:35 crc kubenswrapper[4684]: E0123 09:35:35.102653 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-log" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.102660 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-log" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.103071 4684 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="proxy-httpd" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.103090 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-api" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.103099 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-notification-agent" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.103111 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="ceilometer-central-agent" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.103124 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" containerName="nova-api-log" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.103135 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" containerName="sg-core" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.104028 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.110331 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.110792 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.111264 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.124813 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.143332 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-public-tls-certs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.143485 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.143550 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdqm7\" (UniqueName: \"kubernetes.io/projected/87ef84be-3786-4a1e-a910-24e974d71fc2-kube-api-access-vdqm7\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.143580 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-config-data\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.143600 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ef84be-3786-4a1e-a910-24e974d71fc2-logs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.143669 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.244889 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.244984 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdqm7\" (UniqueName: \"kubernetes.io/projected/87ef84be-3786-4a1e-a910-24e974d71fc2-kube-api-access-vdqm7\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.245030 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-config-data\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.245053 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ef84be-3786-4a1e-a910-24e974d71fc2-logs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.245115 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.245173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-public-tls-certs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.247206 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ef84be-3786-4a1e-a910-24e974d71fc2-logs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.249060 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.249496 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-public-tls-certs\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.250677 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.257647 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-config-data\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.264364 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdqm7\" (UniqueName: \"kubernetes.io/projected/87ef84be-3786-4a1e-a910-24e974d71fc2-kube-api-access-vdqm7\") pod \"nova-api-0\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.428200 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.466297 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6d7" event={"ID":"eb9b804b-5b0a-479a-8834-10c4adb4ad14","Type":"ContainerStarted","Data":"aeebdc1c5705ed418bbd094135a18f77d5369aac12244fe848e31f118b52fa4f"} Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.466832 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6d7" event={"ID":"eb9b804b-5b0a-479a-8834-10c4adb4ad14","Type":"ContainerStarted","Data":"b8609949e102219fd1772235c829d9741fe239df0f33f5ec4b39dc0d76e1650b"} Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.474476 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b","Type":"ContainerDied","Data":"50e973f85719e53611e988ca52161566a7340c7b0fac979e046f8422f25515b2"} Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.474527 4684 scope.go:117] "RemoveContainer" containerID="e19dfabe20cbb867605cf4967faaf2b66c523c588672f5029083250a118a7164" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.474725 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.498886 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6t6d7" podStartSLOduration=2.49886693 podStartE2EDuration="2.49886693s" podCreationTimestamp="2026-01-23 09:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:35.498250513 +0000 UTC m=+1708.121629054" watchObservedRunningTime="2026-01-23 09:35:35.49886693 +0000 UTC m=+1708.122245471" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.538454 4684 scope.go:117] "RemoveContainer" containerID="4e3f846e126c284a16175d190bf3e5718f7ffe453648e6e9aa20190121025557" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.547302 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.560889 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.600458 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b" path="/var/lib/kubelet/pods/0c6ec1fa-d699-4dfd-a7f7-0a1efaefe22b/volumes" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.601380 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef78dc7-d7fb-4521-8b81-5805708cea53" path="/var/lib/kubelet/pods/2ef78dc7-d7fb-4521-8b81-5805708cea53/volumes" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.602046 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.604654 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.609895 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.609991 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.611156 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.613990 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.626113 4684 scope.go:117] "RemoveContainer" containerID="10cd04d16e3668f96b99f4ca9f20c43e3c18d400c5549debc35d8f5edade414b" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.676237 4684 scope.go:117] "RemoveContainer" containerID="41b5a3ffede749f2c21f7b87787775db52a32c8a7086d37064fb28eed7692788" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756199 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756257 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756297 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-run-httpd\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756382 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756430 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkw7n\" (UniqueName: \"kubernetes.io/projected/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-kube-api-access-nkw7n\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756461 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-log-httpd\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756530 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-scripts\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.756577 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-config-data\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.858304 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkw7n\" (UniqueName: \"kubernetes.io/projected/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-kube-api-access-nkw7n\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.858785 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-log-httpd\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.858867 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-scripts\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.858919 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-config-data\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.858974 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.859002 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.859043 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-run-httpd\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.859133 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.865386 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-log-httpd\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.871521 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-run-httpd\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.877992 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.878890 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-scripts\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.889452 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.894895 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.906845 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkw7n\" (UniqueName: \"kubernetes.io/projected/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-kube-api-access-nkw7n\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.907898 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-config-data\") pod \"ceilometer-0\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " pod="openstack/ceilometer-0" Jan 23 09:35:35 crc kubenswrapper[4684]: I0123 09:35:35.926464 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:35:36 crc kubenswrapper[4684]: I0123 09:35:36.131996 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:36 crc kubenswrapper[4684]: I0123 09:35:36.501444 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87ef84be-3786-4a1e-a910-24e974d71fc2","Type":"ContainerStarted","Data":"a8a5f79663cc656b503e4655e9579e3f60c55a2a8e1e2b2dd1c21a47d772ab60"} Jan 23 09:35:36 crc kubenswrapper[4684]: I0123 09:35:36.700242 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:35:36 crc kubenswrapper[4684]: W0123 09:35:36.708661 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69edf57e_bfdb_4e05_b61a_5b42dad87ff8.slice/crio-640bacf31ce185c99a4fe24d716f1a2a3dfdcd62b005bff24672b8c34f70d5ac WatchSource:0}: Error finding container 640bacf31ce185c99a4fe24d716f1a2a3dfdcd62b005bff24672b8c34f70d5ac: Status 404 returned error can't find the container with id 640bacf31ce185c99a4fe24d716f1a2a3dfdcd62b005bff24672b8c34f70d5ac Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.518956 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87ef84be-3786-4a1e-a910-24e974d71fc2","Type":"ContainerStarted","Data":"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da"} Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.519285 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87ef84be-3786-4a1e-a910-24e974d71fc2","Type":"ContainerStarted","Data":"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38"} Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.524917 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerStarted","Data":"640bacf31ce185c99a4fe24d716f1a2a3dfdcd62b005bff24672b8c34f70d5ac"} Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.566249 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.566221215 podStartE2EDuration="2.566221215s" podCreationTimestamp="2026-01-23 09:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:37.561945755 +0000 UTC m=+1710.185324296" watchObservedRunningTime="2026-01-23 09:35:37.566221215 +0000 UTC m=+1710.189599756" Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.855603 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.932414 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8d97cbc7-2chtn"] Jan 23 09:35:37 crc kubenswrapper[4684]: I0123 09:35:37.932689 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerName="dnsmasq-dns" containerID="cri-o://89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69" gracePeriod=10 Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.484809 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.550773 4684 generic.go:334] "Generic (PLEG): container finished" podID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerID="89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69" exitCode=0 Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.550848 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" event={"ID":"42df2da0-3c64-4b95-9545-361fc18ccbaa","Type":"ContainerDied","Data":"89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69"} Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.550875 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" event={"ID":"42df2da0-3c64-4b95-9545-361fc18ccbaa","Type":"ContainerDied","Data":"d9aea27ad3ae943d0e5373781d4f73a0484814c43c49049e66427f3e9bb629be"} Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.550893 4684 scope.go:117] "RemoveContainer" containerID="89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.551076 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8d97cbc7-2chtn" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.558879 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerStarted","Data":"1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90"} Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.603010 4684 scope.go:117] "RemoveContainer" containerID="14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.616776 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-config\") pod \"42df2da0-3c64-4b95-9545-361fc18ccbaa\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.616843 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jt7f\" (UniqueName: \"kubernetes.io/projected/42df2da0-3c64-4b95-9545-361fc18ccbaa-kube-api-access-6jt7f\") pod \"42df2da0-3c64-4b95-9545-361fc18ccbaa\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.617034 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-nb\") pod \"42df2da0-3c64-4b95-9545-361fc18ccbaa\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.617105 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-dns-svc\") pod \"42df2da0-3c64-4b95-9545-361fc18ccbaa\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.617149 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-sb\") pod \"42df2da0-3c64-4b95-9545-361fc18ccbaa\" (UID: \"42df2da0-3c64-4b95-9545-361fc18ccbaa\") " Jan 23 
09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.631520 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42df2da0-3c64-4b95-9545-361fc18ccbaa-kube-api-access-6jt7f" (OuterVolumeSpecName: "kube-api-access-6jt7f") pod "42df2da0-3c64-4b95-9545-361fc18ccbaa" (UID: "42df2da0-3c64-4b95-9545-361fc18ccbaa"). InnerVolumeSpecName "kube-api-access-6jt7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.640878 4684 scope.go:117] "RemoveContainer" containerID="89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69" Jan 23 09:35:38 crc kubenswrapper[4684]: E0123 09:35:38.643614 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69\": container with ID starting with 89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69 not found: ID does not exist" containerID="89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.643665 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69"} err="failed to get container status \"89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69\": rpc error: code = NotFound desc = could not find container \"89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69\": container with ID starting with 89f054ed2c2f5bf2debde13bcdf5dca8bb036be0dff9d1b0aa5f025eb8fd2a69 not found: ID does not exist" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.643714 4684 scope.go:117] "RemoveContainer" containerID="14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28" Jan 23 09:35:38 crc kubenswrapper[4684]: E0123 09:35:38.645108 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28\": container with ID starting with 14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28 not found: ID does not exist" containerID="14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.645136 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28"} err="failed to get container status \"14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28\": rpc error: code = NotFound desc = could not find container \"14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28\": container with ID starting with 14a7f6a607576b6a0399783967d56e6ac77895e5e1007a8abe2e69d126030a28 not found: ID does not exist" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.693552 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42df2da0-3c64-4b95-9545-361fc18ccbaa" (UID: "42df2da0-3c64-4b95-9545-361fc18ccbaa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.698538 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-config" (OuterVolumeSpecName: "config") pod "42df2da0-3c64-4b95-9545-361fc18ccbaa" (UID: "42df2da0-3c64-4b95-9545-361fc18ccbaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.706407 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42df2da0-3c64-4b95-9545-361fc18ccbaa" (UID: "42df2da0-3c64-4b95-9545-361fc18ccbaa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.711899 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42df2da0-3c64-4b95-9545-361fc18ccbaa" (UID: "42df2da0-3c64-4b95-9545-361fc18ccbaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.728324 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.728595 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.728623 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.728635 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42df2da0-3c64-4b95-9545-361fc18ccbaa-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.728648 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jt7f\" (UniqueName: \"kubernetes.io/projected/42df2da0-3c64-4b95-9545-361fc18ccbaa-kube-api-access-6jt7f\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.930638 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8d97cbc7-2chtn"] Jan 23 09:35:38 crc kubenswrapper[4684]: I0123 09:35:38.940986 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8d97cbc7-2chtn"] Jan 23 09:35:39 crc kubenswrapper[4684]: I0123 09:35:39.580013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerStarted","Data":"595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c"} Jan 23 09:35:39 crc kubenswrapper[4684]: I0123 09:35:39.597185 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" path="/var/lib/kubelet/pods/42df2da0-3c64-4b95-9545-361fc18ccbaa/volumes" Jan 23 09:35:40 crc 
kubenswrapper[4684]: I0123 09:35:40.595408 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerStarted","Data":"f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3"} Jan 23 09:35:41 crc kubenswrapper[4684]: I0123 09:35:41.627728 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerStarted","Data":"1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480"} Jan 23 09:35:41 crc kubenswrapper[4684]: I0123 09:35:41.628394 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:35:41 crc kubenswrapper[4684]: I0123 09:35:41.657006 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.426596157 podStartE2EDuration="6.656983223s" podCreationTimestamp="2026-01-23 09:35:35 +0000 UTC" firstStartedPulling="2026-01-23 09:35:36.711051293 +0000 UTC m=+1709.334429834" lastFinishedPulling="2026-01-23 09:35:40.941438359 +0000 UTC m=+1713.564816900" observedRunningTime="2026-01-23 09:35:41.648007101 +0000 UTC m=+1714.271385652" watchObservedRunningTime="2026-01-23 09:35:41.656983223 +0000 UTC m=+1714.280361764" Jan 23 09:35:42 crc kubenswrapper[4684]: I0123 09:35:42.658290 4684 generic.go:334] "Generic (PLEG): container finished" podID="eb9b804b-5b0a-479a-8834-10c4adb4ad14" containerID="aeebdc1c5705ed418bbd094135a18f77d5369aac12244fe848e31f118b52fa4f" exitCode=0 Jan 23 09:35:42 crc kubenswrapper[4684]: I0123 09:35:42.658874 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6d7" event={"ID":"eb9b804b-5b0a-479a-8834-10c4adb4ad14","Type":"ContainerDied","Data":"aeebdc1c5705ed418bbd094135a18f77d5369aac12244fe848e31f118b52fa4f"} Jan 23 09:35:43 crc kubenswrapper[4684]: I0123 09:35:43.728830 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:35:43 crc kubenswrapper[4684]: I0123 09:35:43.729177 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:35:43 crc kubenswrapper[4684]: I0123 09:35:43.729232 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:35:43 crc kubenswrapper[4684]: I0123 09:35:43.730308 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:35:43 crc kubenswrapper[4684]: I0123 09:35:43.730388 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" 
podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" gracePeriod=600 Jan 23 09:35:43 crc kubenswrapper[4684]: E0123 09:35:43.930205 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.123190 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6d7" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.251676 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-config-data\") pod \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.251772 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-scripts\") pod \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.251911 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-combined-ca-bundle\") pod \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.251953 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9dh9\" (UniqueName: \"kubernetes.io/projected/eb9b804b-5b0a-479a-8834-10c4adb4ad14-kube-api-access-p9dh9\") pod \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\" (UID: \"eb9b804b-5b0a-479a-8834-10c4adb4ad14\") " Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.264019 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9b804b-5b0a-479a-8834-10c4adb4ad14-kube-api-access-p9dh9" (OuterVolumeSpecName: "kube-api-access-p9dh9") pod "eb9b804b-5b0a-479a-8834-10c4adb4ad14" (UID: "eb9b804b-5b0a-479a-8834-10c4adb4ad14"). InnerVolumeSpecName "kube-api-access-p9dh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.275886 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-scripts" (OuterVolumeSpecName: "scripts") pod "eb9b804b-5b0a-479a-8834-10c4adb4ad14" (UID: "eb9b804b-5b0a-479a-8834-10c4adb4ad14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.308948 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb9b804b-5b0a-479a-8834-10c4adb4ad14" (UID: "eb9b804b-5b0a-479a-8834-10c4adb4ad14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.352552 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-config-data" (OuterVolumeSpecName: "config-data") pod "eb9b804b-5b0a-479a-8834-10c4adb4ad14" (UID: "eb9b804b-5b0a-479a-8834-10c4adb4ad14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.353958 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.354004 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9dh9\" (UniqueName: \"kubernetes.io/projected/eb9b804b-5b0a-479a-8834-10c4adb4ad14-kube-api-access-p9dh9\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.354021 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.354032 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb9b804b-5b0a-479a-8834-10c4adb4ad14-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.678776 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6d7" event={"ID":"eb9b804b-5b0a-479a-8834-10c4adb4ad14","Type":"ContainerDied","Data":"b8609949e102219fd1772235c829d9741fe239df0f33f5ec4b39dc0d76e1650b"} Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.679094 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8609949e102219fd1772235c829d9741fe239df0f33f5ec4b39dc0d76e1650b" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.679020 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6d7" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.685807 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" exitCode=0 Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.685853 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562"} Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.685894 4684 scope.go:117] "RemoveContainer" containerID="8a400f51794ef4b6fdc66ad213f603d86645f2ebb5c89b0aaf3a7b97ea9ba3a1" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.690054 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:35:44 crc kubenswrapper[4684]: E0123 09:35:44.690523 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.904910 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.905429 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-log" containerID="cri-o://5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38" gracePeriod=30 Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.905531 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-api" containerID="cri-o://7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da" gracePeriod=30 Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.929038 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.929254 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d757dc5c-a82e-403e-a11f-213b043a1b87" containerName="nova-scheduler-scheduler" containerID="cri-o://308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf" gracePeriod=30 Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.950883 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.951241 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-log" containerID="cri-o://cee12fcd9bf176eae676b6e84e67c25b1175501c01645605332c04911fae741b" gracePeriod=30 Jan 23 09:35:44 crc kubenswrapper[4684]: I0123 09:35:44.951288 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-metadata" containerID="cri-o://fe4898e0f384ce64dd84f36fe3a2b7d0ec6b514b769c9683e1930a0f64736cbb" gracePeriod=30 Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.566154 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.689070 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-internal-tls-certs\") pod \"87ef84be-3786-4a1e-a910-24e974d71fc2\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.689150 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-config-data\") pod \"87ef84be-3786-4a1e-a910-24e974d71fc2\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.689197 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdqm7\" (UniqueName: \"kubernetes.io/projected/87ef84be-3786-4a1e-a910-24e974d71fc2-kube-api-access-vdqm7\") pod \"87ef84be-3786-4a1e-a910-24e974d71fc2\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.689269 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-public-tls-certs\") pod \"87ef84be-3786-4a1e-a910-24e974d71fc2\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.689610 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-combined-ca-bundle\") pod \"87ef84be-3786-4a1e-a910-24e974d71fc2\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.689709 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ef84be-3786-4a1e-a910-24e974d71fc2-logs\") pod \"87ef84be-3786-4a1e-a910-24e974d71fc2\" (UID: \"87ef84be-3786-4a1e-a910-24e974d71fc2\") " Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.708335 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87ef84be-3786-4a1e-a910-24e974d71fc2-logs" (OuterVolumeSpecName: "logs") pod "87ef84be-3786-4a1e-a910-24e974d71fc2" (UID: "87ef84be-3786-4a1e-a910-24e974d71fc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.719576 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ef84be-3786-4a1e-a910-24e974d71fc2-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.728310 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87ef84be-3786-4a1e-a910-24e974d71fc2" (UID: "87ef84be-3786-4a1e-a910-24e974d71fc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.733896 4684 generic.go:334] "Generic (PLEG): container finished" podID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerID="7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da" exitCode=0 Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.733932 4684 generic.go:334] "Generic (PLEG): container finished" podID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerID="5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38" exitCode=143 Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.733979 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87ef84be-3786-4a1e-a910-24e974d71fc2","Type":"ContainerDied","Data":"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da"} Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.734013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87ef84be-3786-4a1e-a910-24e974d71fc2","Type":"ContainerDied","Data":"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38"} Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.734025 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87ef84be-3786-4a1e-a910-24e974d71fc2","Type":"ContainerDied","Data":"a8a5f79663cc656b503e4655e9579e3f60c55a2a8e1e2b2dd1c21a47d772ab60"} Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.734042 4684 scope.go:117] "RemoveContainer" containerID="7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.734205 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.746050 4684 generic.go:334] "Generic (PLEG): container finished" podID="8422775d-1328-4c3b-ab94-f235f45da903" containerID="cee12fcd9bf176eae676b6e84e67c25b1175501c01645605332c04911fae741b" exitCode=143 Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.746159 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8422775d-1328-4c3b-ab94-f235f45da903","Type":"ContainerDied","Data":"cee12fcd9bf176eae676b6e84e67c25b1175501c01645605332c04911fae741b"} Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.754002 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ef84be-3786-4a1e-a910-24e974d71fc2-kube-api-access-vdqm7" (OuterVolumeSpecName: "kube-api-access-vdqm7") pod "87ef84be-3786-4a1e-a910-24e974d71fc2" (UID: "87ef84be-3786-4a1e-a910-24e974d71fc2"). InnerVolumeSpecName "kube-api-access-vdqm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.761931 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-config-data" (OuterVolumeSpecName: "config-data") pod "87ef84be-3786-4a1e-a910-24e974d71fc2" (UID: "87ef84be-3786-4a1e-a910-24e974d71fc2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.772988 4684 scope.go:117] "RemoveContainer" containerID="5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.786779 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87ef84be-3786-4a1e-a910-24e974d71fc2" (UID: "87ef84be-3786-4a1e-a910-24e974d71fc2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.798859 4684 scope.go:117] "RemoveContainer" containerID="7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da" Jan 23 09:35:45 crc kubenswrapper[4684]: E0123 09:35:45.799338 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da\": container with ID starting with 7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da not found: ID does not exist" containerID="7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.799376 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da"} err="failed to get container status \"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da\": rpc error: code = NotFound desc = could not find container \"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da\": container with ID starting with 7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da not found: ID does not exist" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.799402 4684 scope.go:117] "RemoveContainer" containerID="5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38" Jan 23 09:35:45 crc kubenswrapper[4684]: E0123 09:35:45.799908 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38\": container with ID starting with 5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38 not found: ID does not exist" containerID="5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.799936 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38"} err="failed to get container status \"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38\": rpc error: code = NotFound desc = could not find container \"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38\": container with ID starting with 5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38 not found: ID does not exist" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.799953 4684 scope.go:117] "RemoveContainer" containerID="7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.801428 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da"} err="failed to get container status \"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da\": rpc error: code = NotFound desc = could not find container \"7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da\": container with ID starting with 7e5d55e7689a1736418c89f67752395690ef9d21274632006e777a1ba0fd29da not found: ID does not exist" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.801480 4684 scope.go:117] "RemoveContainer" containerID="5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.801761 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87ef84be-3786-4a1e-a910-24e974d71fc2" (UID: "87ef84be-3786-4a1e-a910-24e974d71fc2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.801916 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38"} err="failed to get container status \"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38\": rpc error: code = NotFound desc = could not find container \"5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38\": container with ID starting with 5d6ea5787dad3d5992af7a3a0a3b2375ee41579979533df056b3335f5d617a38 not found: ID does not exist" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.823017 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.823060 4684 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.823073 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.823085 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdqm7\" (UniqueName: \"kubernetes.io/projected/87ef84be-3786-4a1e-a910-24e974d71fc2-kube-api-access-vdqm7\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:45 crc kubenswrapper[4684]: I0123 09:35:45.823099 4684 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87ef84be-3786-4a1e-a910-24e974d71fc2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.075016 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.091818 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110088 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:46 crc kubenswrapper[4684]: E0123 09:35:46.110491 4684 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-api" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110507 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-api" Jan 23 09:35:46 crc kubenswrapper[4684]: E0123 09:35:46.110525 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-log" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110532 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-log" Jan 23 09:35:46 crc kubenswrapper[4684]: E0123 09:35:46.110552 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9b804b-5b0a-479a-8834-10c4adb4ad14" containerName="nova-manage" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110560 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9b804b-5b0a-479a-8834-10c4adb4ad14" containerName="nova-manage" Jan 23 09:35:46 crc kubenswrapper[4684]: E0123 09:35:46.110575 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerName="dnsmasq-dns" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110581 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerName="dnsmasq-dns" Jan 23 09:35:46 crc kubenswrapper[4684]: E0123 09:35:46.110590 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerName="init" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110596 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerName="init" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110817 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-api" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110832 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9b804b-5b0a-479a-8834-10c4adb4ad14" containerName="nova-manage" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110845 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" containerName="nova-api-log" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.110853 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="42df2da0-3c64-4b95-9545-361fc18ccbaa" containerName="dnsmasq-dns" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.111927 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.113992 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.114808 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.120688 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.125291 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.232845 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-config-data\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.232920 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-logs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.232958 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6fj\" (UniqueName: \"kubernetes.io/projected/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-kube-api-access-6j6fj\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.233082 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.233118 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.233141 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.334910 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.334980 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.335015 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.335110 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-config-data\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.335148 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-logs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.335178 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j6fj\" (UniqueName: \"kubernetes.io/projected/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-kube-api-access-6j6fj\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.336156 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-logs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.340022 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.343529 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-config-data\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.345528 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.347947 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.368931 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j6fj\" (UniqueName: \"kubernetes.io/projected/e0cd885d-0d54-4392-9d8a-cd2cb48b47d2-kube-api-access-6j6fj\") pod \"nova-api-0\" (UID: \"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2\") " 
pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.434320 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:46 crc kubenswrapper[4684]: I0123 09:35:46.966706 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:46 crc kubenswrapper[4684]: W0123 09:35:46.977882 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0cd885d_0d54_4392_9d8a_cd2cb48b47d2.slice/crio-b325568a9b6f982f737f301a276e1a6521c273cc5f942e849bafafd2790cb7d1 WatchSource:0}: Error finding container b325568a9b6f982f737f301a276e1a6521c273cc5f942e849bafafd2790cb7d1: Status 404 returned error can't find the container with id b325568a9b6f982f737f301a276e1a6521c273cc5f942e849bafafd2790cb7d1 Jan 23 09:35:47 crc kubenswrapper[4684]: E0123 09:35:47.324464 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 09:35:47 crc kubenswrapper[4684]: E0123 09:35:47.327931 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 09:35:47 crc kubenswrapper[4684]: E0123 09:35:47.332318 4684 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 09:35:47 crc kubenswrapper[4684]: E0123 09:35:47.332557 4684 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d757dc5c-a82e-403e-a11f-213b043a1b87" containerName="nova-scheduler-scheduler" Jan 23 09:35:47 crc kubenswrapper[4684]: I0123 09:35:47.592215 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ef84be-3786-4a1e-a910-24e974d71fc2" path="/var/lib/kubelet/pods/87ef84be-3786-4a1e-a910-24e974d71fc2/volumes" Jan 23 09:35:47 crc kubenswrapper[4684]: I0123 09:35:47.782617 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2","Type":"ContainerStarted","Data":"ab687510277bb0179300bac78e945f901a00066a4ae12985bc9835baaf6424b8"} Jan 23 09:35:47 crc kubenswrapper[4684]: I0123 09:35:47.782676 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2","Type":"ContainerStarted","Data":"98c2f36be50d9df7d3a72939d34b6223765237dab8f74c80b1263ca40e7083b3"} Jan 23 09:35:47 crc kubenswrapper[4684]: I0123 09:35:47.782690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e0cd885d-0d54-4392-9d8a-cd2cb48b47d2","Type":"ContainerStarted","Data":"b325568a9b6f982f737f301a276e1a6521c273cc5f942e849bafafd2790cb7d1"} Jan 23 09:35:47 crc kubenswrapper[4684]: I0123 09:35:47.811583 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8115596969999999 podStartE2EDuration="1.811559697s" podCreationTimestamp="2026-01-23 09:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:47.807966186 +0000 UTC m=+1720.431344727" watchObservedRunningTime="2026-01-23 09:35:47.811559697 +0000 UTC m=+1720.434938248" Jan 23 09:35:48 crc kubenswrapper[4684]: I0123 09:35:48.261422 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": dial tcp 10.217.0.183:8775: connect: connection refused" Jan 23 09:35:48 crc kubenswrapper[4684]: I0123 09:35:48.261427 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": dial tcp 10.217.0.183:8775: connect: connection refused" Jan 23 09:35:48 crc kubenswrapper[4684]: I0123 09:35:48.794351 4684 generic.go:334] "Generic (PLEG): container finished" podID="8422775d-1328-4c3b-ab94-f235f45da903" containerID="fe4898e0f384ce64dd84f36fe3a2b7d0ec6b514b769c9683e1930a0f64736cbb" exitCode=0 Jan 23 09:35:48 crc kubenswrapper[4684]: I0123 09:35:48.795367 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8422775d-1328-4c3b-ab94-f235f45da903","Type":"ContainerDied","Data":"fe4898e0f384ce64dd84f36fe3a2b7d0ec6b514b769c9683e1930a0f64736cbb"} Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.196474 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.305270 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-combined-ca-bundle\") pod \"8422775d-1328-4c3b-ab94-f235f45da903\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.305321 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tpcn\" (UniqueName: \"kubernetes.io/projected/8422775d-1328-4c3b-ab94-f235f45da903-kube-api-access-5tpcn\") pod \"8422775d-1328-4c3b-ab94-f235f45da903\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.305401 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-config-data\") pod \"8422775d-1328-4c3b-ab94-f235f45da903\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.305557 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8422775d-1328-4c3b-ab94-f235f45da903-logs\") pod \"8422775d-1328-4c3b-ab94-f235f45da903\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.305654 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-nova-metadata-tls-certs\") pod \"8422775d-1328-4c3b-ab94-f235f45da903\" (UID: \"8422775d-1328-4c3b-ab94-f235f45da903\") " Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.308771 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8422775d-1328-4c3b-ab94-f235f45da903-logs" (OuterVolumeSpecName: "logs") pod "8422775d-1328-4c3b-ab94-f235f45da903" (UID: "8422775d-1328-4c3b-ab94-f235f45da903"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.311333 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8422775d-1328-4c3b-ab94-f235f45da903-kube-api-access-5tpcn" (OuterVolumeSpecName: "kube-api-access-5tpcn") pod "8422775d-1328-4c3b-ab94-f235f45da903" (UID: "8422775d-1328-4c3b-ab94-f235f45da903"). InnerVolumeSpecName "kube-api-access-5tpcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.343577 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-config-data" (OuterVolumeSpecName: "config-data") pod "8422775d-1328-4c3b-ab94-f235f45da903" (UID: "8422775d-1328-4c3b-ab94-f235f45da903"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.366158 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8422775d-1328-4c3b-ab94-f235f45da903" (UID: "8422775d-1328-4c3b-ab94-f235f45da903"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.409041 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.409086 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tpcn\" (UniqueName: \"kubernetes.io/projected/8422775d-1328-4c3b-ab94-f235f45da903-kube-api-access-5tpcn\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.409100 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.409111 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8422775d-1328-4c3b-ab94-f235f45da903-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.423511 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8422775d-1328-4c3b-ab94-f235f45da903" (UID: "8422775d-1328-4c3b-ab94-f235f45da903"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.511020 4684 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8422775d-1328-4c3b-ab94-f235f45da903-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.806944 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.806936 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8422775d-1328-4c3b-ab94-f235f45da903","Type":"ContainerDied","Data":"076d1b5b473d697ca3e476f7444ed499ef0d22babc884eadecd1f9fd272e7167"} Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.807127 4684 scope.go:117] "RemoveContainer" containerID="fe4898e0f384ce64dd84f36fe3a2b7d0ec6b514b769c9683e1930a0f64736cbb" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.811312 4684 generic.go:334] "Generic (PLEG): container finished" podID="d757dc5c-a82e-403e-a11f-213b043a1b87" containerID="308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf" exitCode=0 Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.811493 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d757dc5c-a82e-403e-a11f-213b043a1b87","Type":"ContainerDied","Data":"308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf"} Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.842145 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.845714 4684 scope.go:117] "RemoveContainer" containerID="cee12fcd9bf176eae676b6e84e67c25b1175501c01645605332c04911fae741b" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.859408 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.909988 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:49 crc kubenswrapper[4684]: E0123 09:35:49.910452 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-metadata" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.910477 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-metadata" Jan 23 09:35:49 crc kubenswrapper[4684]: E0123 09:35:49.910511 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-log" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.910519 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-log" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.910763 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-log" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.910785 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8422775d-1328-4c3b-ab94-f235f45da903" containerName="nova-metadata-metadata" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.911913 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.917224 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.917496 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:35:49 crc kubenswrapper[4684]: I0123 09:35:49.933213 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.020856 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472gp\" (UniqueName: \"kubernetes.io/projected/48b55b45-1ad6-4310-aaff-0a978bbf5538-kube-api-access-472gp\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.021160 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-config-data\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.021372 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b55b45-1ad6-4310-aaff-0a978bbf5538-logs\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.021505 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.021642 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.035531 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.123497 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt2s2\" (UniqueName: \"kubernetes.io/projected/d757dc5c-a82e-403e-a11f-213b043a1b87-kube-api-access-jt2s2\") pod \"d757dc5c-a82e-403e-a11f-213b043a1b87\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.123602 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-config-data\") pod \"d757dc5c-a82e-403e-a11f-213b043a1b87\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.123675 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-combined-ca-bundle\") pod \"d757dc5c-a82e-403e-a11f-213b043a1b87\" (UID: \"d757dc5c-a82e-403e-a11f-213b043a1b87\") " Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.123932 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-472gp\" (UniqueName: \"kubernetes.io/projected/48b55b45-1ad6-4310-aaff-0a978bbf5538-kube-api-access-472gp\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.124596 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-config-data\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.124757 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b55b45-1ad6-4310-aaff-0a978bbf5538-logs\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.124816 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.124886 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.126450 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b55b45-1ad6-4310-aaff-0a978bbf5538-logs\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.132226 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-config-data\") pod \"nova-metadata-0\" (UID: 
\"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.132388 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d757dc5c-a82e-403e-a11f-213b043a1b87-kube-api-access-jt2s2" (OuterVolumeSpecName: "kube-api-access-jt2s2") pod "d757dc5c-a82e-403e-a11f-213b043a1b87" (UID: "d757dc5c-a82e-403e-a11f-213b043a1b87"). InnerVolumeSpecName "kube-api-access-jt2s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.136052 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.136282 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b55b45-1ad6-4310-aaff-0a978bbf5538-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.148481 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-472gp\" (UniqueName: \"kubernetes.io/projected/48b55b45-1ad6-4310-aaff-0a978bbf5538-kube-api-access-472gp\") pod \"nova-metadata-0\" (UID: \"48b55b45-1ad6-4310-aaff-0a978bbf5538\") " pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.160179 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-config-data" (OuterVolumeSpecName: "config-data") pod "d757dc5c-a82e-403e-a11f-213b043a1b87" (UID: "d757dc5c-a82e-403e-a11f-213b043a1b87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.173068 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d757dc5c-a82e-403e-a11f-213b043a1b87" (UID: "d757dc5c-a82e-403e-a11f-213b043a1b87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.226350 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.226389 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt2s2\" (UniqueName: \"kubernetes.io/projected/d757dc5c-a82e-403e-a11f-213b043a1b87-kube-api-access-jt2s2\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.226404 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d757dc5c-a82e-403e-a11f-213b043a1b87-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.240518 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.756148 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.823874 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"48b55b45-1ad6-4310-aaff-0a978bbf5538","Type":"ContainerStarted","Data":"c1a19fc7447b25416b25e72feb60143092362c184daf8a70564fb933db1ce782"} Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.830965 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d757dc5c-a82e-403e-a11f-213b043a1b87","Type":"ContainerDied","Data":"1f0e3b63c148c6a2ed7afda322210130c3a22db064ace08f38a16d7c3a0f5521"} Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.831005 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.831050 4684 scope.go:117] "RemoveContainer" containerID="308b4f3f4167d94456a496ef6756811bc5d445e33a17274a5028b8787db31acf" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.910180 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.926888 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.935093 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:50 crc kubenswrapper[4684]: E0123 09:35:50.935587 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d757dc5c-a82e-403e-a11f-213b043a1b87" containerName="nova-scheduler-scheduler" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.935605 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d757dc5c-a82e-403e-a11f-213b043a1b87" containerName="nova-scheduler-scheduler" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.935810 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d757dc5c-a82e-403e-a11f-213b043a1b87" containerName="nova-scheduler-scheduler" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.936491 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.939688 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 09:35:50 crc kubenswrapper[4684]: I0123 09:35:50.947022 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.039660 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-config-data\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.039744 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d9c\" (UniqueName: \"kubernetes.io/projected/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-kube-api-access-67d9c\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.039876 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.141713 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-config-data\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.141807 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67d9c\" (UniqueName: \"kubernetes.io/projected/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-kube-api-access-67d9c\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.142227 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.146467 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-config-data\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.146854 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.164269 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67d9c\" (UniqueName: 
\"kubernetes.io/projected/2fab0d59-7e3d-4c70-a3a7-63dcb3629988-kube-api-access-67d9c\") pod \"nova-scheduler-0\" (UID: \"2fab0d59-7e3d-4c70-a3a7-63dcb3629988\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.265449 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.600059 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8422775d-1328-4c3b-ab94-f235f45da903" path="/var/lib/kubelet/pods/8422775d-1328-4c3b-ab94-f235f45da903/volumes" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.601008 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d757dc5c-a82e-403e-a11f-213b043a1b87" path="/var/lib/kubelet/pods/d757dc5c-a82e-403e-a11f-213b043a1b87/volumes" Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.752212 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:51 crc kubenswrapper[4684]: W0123 09:35:51.752765 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fab0d59_7e3d_4c70_a3a7_63dcb3629988.slice/crio-083ba1569f7953f1dc1fb98c62fbce4478ee78e707cfdd7bdec9905dd4d4823c WatchSource:0}: Error finding container 083ba1569f7953f1dc1fb98c62fbce4478ee78e707cfdd7bdec9905dd4d4823c: Status 404 returned error can't find the container with id 083ba1569f7953f1dc1fb98c62fbce4478ee78e707cfdd7bdec9905dd4d4823c Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.850531 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"48b55b45-1ad6-4310-aaff-0a978bbf5538","Type":"ContainerStarted","Data":"853520a0bd51d7f70ca1a522a2a3d63faf5f4f43a2c6ad4de73462b1fb3fd13e"} Jan 23 09:35:51 crc kubenswrapper[4684]: I0123 09:35:51.853800 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fab0d59-7e3d-4c70-a3a7-63dcb3629988","Type":"ContainerStarted","Data":"083ba1569f7953f1dc1fb98c62fbce4478ee78e707cfdd7bdec9905dd4d4823c"} Jan 23 09:35:52 crc kubenswrapper[4684]: I0123 09:35:52.865345 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"48b55b45-1ad6-4310-aaff-0a978bbf5538","Type":"ContainerStarted","Data":"7dc3748a249e5b2418aa84c6579395a1abcd6685afab7bd3c972fbcbcd6da1d0"} Jan 23 09:35:52 crc kubenswrapper[4684]: I0123 09:35:52.865715 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fab0d59-7e3d-4c70-a3a7-63dcb3629988","Type":"ContainerStarted","Data":"f376beafd55295fdb1465b00e73e99be1a144df1e1840f8151211e86e7b7de70"} Jan 23 09:35:52 crc kubenswrapper[4684]: I0123 09:35:52.902288 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.902262169 podStartE2EDuration="3.902262169s" podCreationTimestamp="2026-01-23 09:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:52.889843271 +0000 UTC m=+1725.513221812" watchObservedRunningTime="2026-01-23 09:35:52.902262169 +0000 UTC m=+1725.525640710" Jan 23 09:35:52 crc kubenswrapper[4684]: I0123 09:35:52.909589 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.909567294 
podStartE2EDuration="2.909567294s" podCreationTimestamp="2026-01-23 09:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:52.908253458 +0000 UTC m=+1725.531632009" watchObservedRunningTime="2026-01-23 09:35:52.909567294 +0000 UTC m=+1725.532945835" Jan 23 09:35:55 crc kubenswrapper[4684]: I0123 09:35:55.241332 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:35:55 crc kubenswrapper[4684]: I0123 09:35:55.241679 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:35:56 crc kubenswrapper[4684]: I0123 09:35:56.265822 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 09:35:56 crc kubenswrapper[4684]: I0123 09:35:56.435556 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:35:56 crc kubenswrapper[4684]: I0123 09:35:56.435632 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:35:57 crc kubenswrapper[4684]: I0123 09:35:57.451912 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0cd885d-0d54-4392-9d8a-cd2cb48b47d2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.193:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:57 crc kubenswrapper[4684]: I0123 09:35:57.451919 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0cd885d-0d54-4392-9d8a-cd2cb48b47d2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.193:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:59 crc kubenswrapper[4684]: I0123 09:35:59.583467 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:35:59 crc kubenswrapper[4684]: E0123 09:35:59.584022 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:36:00 crc kubenswrapper[4684]: I0123 09:36:00.241626 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:36:00 crc kubenswrapper[4684]: I0123 09:36:00.241732 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:36:01 crc kubenswrapper[4684]: I0123 09:36:01.257904 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="48b55b45-1ad6-4310-aaff-0a978bbf5538" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:36:01 crc kubenswrapper[4684]: I0123 09:36:01.257961 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="48b55b45-1ad6-4310-aaff-0a978bbf5538" containerName="nova-metadata-log" probeResult="failure" 
output="Get \"https://10.217.0.194:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:36:01 crc kubenswrapper[4684]: I0123 09:36:01.266128 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 09:36:01 crc kubenswrapper[4684]: I0123 09:36:01.293882 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 09:36:01 crc kubenswrapper[4684]: I0123 09:36:01.969958 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 09:36:05 crc kubenswrapper[4684]: I0123 09:36:05.940500 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 09:36:06 crc kubenswrapper[4684]: I0123 09:36:06.445483 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 09:36:06 crc kubenswrapper[4684]: I0123 09:36:06.446497 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 09:36:06 crc kubenswrapper[4684]: I0123 09:36:06.446620 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 09:36:06 crc kubenswrapper[4684]: I0123 09:36:06.453337 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 09:36:06 crc kubenswrapper[4684]: I0123 09:36:06.985880 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 09:36:06 crc kubenswrapper[4684]: I0123 09:36:06.996454 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 09:36:07 crc kubenswrapper[4684]: I0123 09:36:07.375391 4684 scope.go:117] "RemoveContainer" containerID="bb936caf3d2f07c80ffe4d73c5c7116e3ad3f7aebf3a16892f0981416095a83c" Jan 23 09:36:07 crc kubenswrapper[4684]: I0123 09:36:07.411243 4684 scope.go:117] "RemoveContainer" containerID="bf5197c3f0eb5ac2458125fa7c0f3ee0a42c7ce5b1fe5883c14a10e51e51e123" Jan 23 09:36:07 crc kubenswrapper[4684]: I0123 09:36:07.463652 4684 scope.go:117] "RemoveContainer" containerID="c54de7caf1a0c9b9e1e2a2d38c6dc0095338361df2e9d87f35b9cc94760fa909" Jan 23 09:36:10 crc kubenswrapper[4684]: I0123 09:36:10.247165 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 09:36:10 crc kubenswrapper[4684]: I0123 09:36:10.248340 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 09:36:10 crc kubenswrapper[4684]: I0123 09:36:10.256044 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 09:36:10 crc kubenswrapper[4684]: I0123 09:36:10.256831 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 09:36:12 crc kubenswrapper[4684]: I0123 09:36:12.582591 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:36:12 crc kubenswrapper[4684]: E0123 09:36:12.583509 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:36:18 crc kubenswrapper[4684]: I0123 09:36:18.437936 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:36:20 crc kubenswrapper[4684]: I0123 09:36:20.276137 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:36:25 crc kubenswrapper[4684]: I0123 09:36:25.497960 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerName="rabbitmq" containerID="cri-o://a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697" gracePeriod=604795 Jan 23 09:36:25 crc kubenswrapper[4684]: I0123 09:36:25.502530 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" containerID="cri-o://817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12" gracePeriod=604793 Jan 23 09:36:26 crc kubenswrapper[4684]: I0123 09:36:26.582860 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:36:26 crc kubenswrapper[4684]: E0123 09:36:26.583534 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.186685 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.194619 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.267148 4684 generic.go:334] "Generic (PLEG): container finished" podID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerID="a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697" exitCode=0 Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.267302 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.267407 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a0c15bc-8e5e-47ee-9c23-1673363f1603","Type":"ContainerDied","Data":"a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697"} Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.267489 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a0c15bc-8e5e-47ee-9c23-1673363f1603","Type":"ContainerDied","Data":"0188f9303d149eef3a32673d40e166bb661ca7d56d33c5e1446afa1acc86659a"} Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.267515 4684 scope.go:117] "RemoveContainer" containerID="a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.277038 4684 generic.go:334] "Generic (PLEG): container finished" podID="82a71d38-3c68-43a9-9913-bc184ebed996" containerID="817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12" exitCode=0 Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.277093 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82a71d38-3c68-43a9-9913-bc184ebed996","Type":"ContainerDied","Data":"817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12"} Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.277126 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82a71d38-3c68-43a9-9913-bc184ebed996","Type":"ContainerDied","Data":"1b21bd9e3930037e74d69751cd0149f3b8e0b508ed3e480c2bf99cb0a21657f7"} Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.277193 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282013 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282077 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-tls\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282106 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-config-data\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282135 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-tls\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282198 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-plugins\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282234 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82a71d38-3c68-43a9-9913-bc184ebed996-erlang-cookie-secret\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282260 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-confd\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282292 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-plugins-conf\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282316 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-server-conf\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282359 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-server-conf\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 
23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282419 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm4ks\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-kube-api-access-nm4ks\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282446 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-confd\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282482 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0c15bc-8e5e-47ee-9c23-1673363f1603-pod-info\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282508 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-plugins-conf\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282529 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-erlang-cookie\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282551 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-config-data\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282576 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-plugins\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282623 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxmdh\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-kube-api-access-wxmdh\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282652 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-erlang-cookie\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282675 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82a71d38-3c68-43a9-9913-bc184ebed996-pod-info\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: 
\"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282753 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"82a71d38-3c68-43a9-9913-bc184ebed996\" (UID: \"82a71d38-3c68-43a9-9913-bc184ebed996\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.282793 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0c15bc-8e5e-47ee-9c23-1673363f1603-erlang-cookie-secret\") pod \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\" (UID: \"6a0c15bc-8e5e-47ee-9c23-1673363f1603\") " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.299109 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.300945 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-kube-api-access-nm4ks" (OuterVolumeSpecName: "kube-api-access-nm4ks") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "kube-api-access-nm4ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.304180 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.308187 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0c15bc-8e5e-47ee-9c23-1673363f1603-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.312146 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.312312 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.314158 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.315627 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.318377 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-kube-api-access-wxmdh" (OuterVolumeSpecName: "kube-api-access-wxmdh") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "kube-api-access-wxmdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.318945 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6a0c15bc-8e5e-47ee-9c23-1673363f1603-pod-info" (OuterVolumeSpecName: "pod-info") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.360992 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.361052 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.362187 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/82a71d38-3c68-43a9-9913-bc184ebed996-pod-info" (OuterVolumeSpecName: "pod-info") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.363063 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.363761 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.364133 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82a71d38-3c68-43a9-9913-bc184ebed996-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.382304 4684 scope.go:117] "RemoveContainer" containerID="358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388199 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388224 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388233 4684 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82a71d38-3c68-43a9-9913-bc184ebed996-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388242 4684 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388250 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm4ks\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-kube-api-access-nm4ks\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388259 4684 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0c15bc-8e5e-47ee-9c23-1673363f1603-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388267 4684 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388275 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.388283 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 
09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402656 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxmdh\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-kube-api-access-wxmdh\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402692 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402723 4684 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82a71d38-3c68-43a9-9913-bc184ebed996-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402761 4684 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402778 4684 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0c15bc-8e5e-47ee-9c23-1673363f1603-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402795 4684 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.402806 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.400460 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-config-data" (OuterVolumeSpecName: "config-data") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.433077 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-config-data" (OuterVolumeSpecName: "config-data") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.458817 4684 scope.go:117] "RemoveContainer" containerID="a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.459247 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-server-conf" (OuterVolumeSpecName: "server-conf") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.459498 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697\": container with ID starting with a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697 not found: ID does not exist" containerID="a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.459544 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697"} err="failed to get container status \"a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697\": rpc error: code = NotFound desc = could not find container \"a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697\": container with ID starting with a4c9a117c5b92cb67fe3eaf6e3d9b1260eea190f23710b986d5ce70813d55697 not found: ID does not exist" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.459569 4684 scope.go:117] "RemoveContainer" containerID="358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54" Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.462995 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54\": container with ID starting with 358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54 not found: ID does not exist" containerID="358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.463055 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54"} err="failed to get container status \"358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54\": rpc error: code = NotFound desc = could not find container \"358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54\": container with ID starting with 358ba8c0530319a3946cc789d5cfc05a51b3e76a95d94ea41ca6b9aea260ae54 not found: ID does not exist" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.463084 4684 scope.go:117] "RemoveContainer" containerID="817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.491015 4684 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.491447 4684 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.495428 4684 scope.go:117] "RemoveContainer" containerID="117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.512013 4684 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.512042 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.512051 4684 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.512059 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0c15bc-8e5e-47ee-9c23-1673363f1603-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.512067 4684 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.551032 4684 scope.go:117] "RemoveContainer" containerID="817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12" Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.552203 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12\": container with ID starting with 817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12 not found: ID does not exist" containerID="817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.552268 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12"} err="failed to get container status \"817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12\": rpc error: code = NotFound desc = could not find container \"817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12\": container with ID starting with 817ea9a29e87f839d270ed92f755ecda3bba82069b8a72d18c371684467bac12 not found: ID does not exist" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.552301 4684 scope.go:117] "RemoveContainer" containerID="117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6" Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.552609 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6\": container with ID starting with 117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6 not found: ID does not exist" containerID="117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.552648 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6"} err="failed to get container status \"117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6\": rpc error: code = NotFound desc = could not find container \"117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6\": container with ID starting with 117c3cfb0a176cfc1500fea0731f48b23931e3499ec86b05f8bbcf5b2f8b8bb6 not found: ID does not exist" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.556781 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-server-conf" (OuterVolumeSpecName: "server-conf") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.576035 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6a0c15bc-8e5e-47ee-9c23-1673363f1603" (UID: "6a0c15bc-8e5e-47ee-9c23-1673363f1603"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.591935 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "82a71d38-3c68-43a9-9913-bc184ebed996" (UID: "82a71d38-3c68-43a9-9913-bc184ebed996"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.614040 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0c15bc-8e5e-47ee-9c23-1673363f1603-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.614080 4684 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82a71d38-3c68-43a9-9913-bc184ebed996-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.614089 4684 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82a71d38-3c68-43a9-9913-bc184ebed996-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.901011 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.910317 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.933235 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.953099 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.962586 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.963820 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerName="rabbitmq" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.963842 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerName="rabbitmq" Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.963861 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.963868 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" Jan 23 09:36:32 crc 
kubenswrapper[4684]: E0123 09:36:32.963885 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerName="setup-container" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.963892 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerName="setup-container" Jan 23 09:36:32 crc kubenswrapper[4684]: E0123 09:36:32.963914 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="setup-container" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.963920 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="setup-container" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.964085 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" containerName="rabbitmq" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.964098 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" containerName="rabbitmq" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.965090 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.971997 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.972131 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.972020 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.972533 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.972647 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.972789 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qjlcz" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.972900 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 09:36:32 crc kubenswrapper[4684]: I0123 09:36:32.979203 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.001863 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.009968 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.017419 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.017828 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.017920 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.018101 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.018138 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.018409 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wr9hs" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.018611 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.021793 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.021857 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.021907 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.021955 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.021985 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.022039 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") 
" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.022083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.022106 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.022149 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrc9\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-kube-api-access-6zrc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.022188 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.022225 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.066046 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.125987 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126070 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126138 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126205 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126224 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q75w\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-kube-api-access-5q75w\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126299 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126329 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126383 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126450 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.126923 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127366 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127575 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127621 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127660 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127741 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrc9\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-kube-api-access-6zrc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127742 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127759 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127830 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127865 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127914 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.127979 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc 
kubenswrapper[4684]: I0123 09:36:33.128033 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.128076 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.128080 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.129190 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.129293 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-config-data\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.129323 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.135631 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.150791 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.152480 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.153037 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.153066 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrc9\" (UniqueName: \"kubernetes.io/projected/5b7f0e5b-e1ba-4da5-b644-e16236fd5403-kube-api-access-6zrc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.167038 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5b7f0e5b-e1ba-4da5-b644-e16236fd5403\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.231097 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q75w\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-kube-api-access-5q75w\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.231208 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.231343 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.231752 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232029 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232064 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232085 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 
09:36:33.232120 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232135 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232158 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-config-data\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232172 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232193 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.232959 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.233143 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.233630 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.233811 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-config-data\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.234099 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.236818 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.237220 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.237680 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.239243 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.286378 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.301674 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q75w\" (UniqueName: \"kubernetes.io/projected/d05a61f9-7d60-4073-ae62-7a4a59fe6ed6-kube-api-access-5q75w\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.316724 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6\") " pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.347537 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.600997 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0c15bc-8e5e-47ee-9c23-1673363f1603" path="/var/lib/kubelet/pods/6a0c15bc-8e5e-47ee-9c23-1673363f1603/volumes" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.605373 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a71d38-3c68-43a9-9913-bc184ebed996" path="/var/lib/kubelet/pods/82a71d38-3c68-43a9-9913-bc184ebed996/volumes" Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.858078 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:36:33 crc kubenswrapper[4684]: I0123 09:36:33.936307 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:36:34 crc kubenswrapper[4684]: I0123 09:36:34.327751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6","Type":"ContainerStarted","Data":"50f5147c3b1ec6aaba4eea6d3fa10aa006bfe54891c39e271c957c962deaeaf6"} Jan 23 09:36:34 crc kubenswrapper[4684]: I0123 09:36:34.337856 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5b7f0e5b-e1ba-4da5-b644-e16236fd5403","Type":"ContainerStarted","Data":"b2b86e2aef71ce77f7c855f72e9206396f7da1034b83e91eb7c43efe1fc8295c"} Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.348500 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6","Type":"ContainerStarted","Data":"e59b637437b36e9b6b974a9c3f463a2bc2834101358bb253fd84a0c7745e10c7"} Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.353128 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5b7f0e5b-e1ba-4da5-b644-e16236fd5403","Type":"ContainerStarted","Data":"90e35e316eb55a861f0dd2afb7814645afe4b1a0025e07f6fc78d9e4ed00572f"} Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.599271 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bf6f4788c-zvlv7"] Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.604096 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.608666 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.618755 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bf6f4788c-zvlv7"] Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.682860 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869nk\" (UniqueName: \"kubernetes.io/projected/716a1bb5-66f3-471e-96e1-353506de67a4-kube-api-access-869nk\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.683031 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-dns-svc\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.683083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-config\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.683186 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.683303 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.683409 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.785869 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-dns-svc\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787009 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-dns-svc\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " 
pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787014 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-config\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787110 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787185 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787239 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787392 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869nk\" (UniqueName: \"kubernetes.io/projected/716a1bb5-66f3-471e-96e1-353506de67a4-kube-api-access-869nk\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.787924 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-config\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.788021 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.788398 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.789032 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc 
kubenswrapper[4684]: I0123 09:36:35.806587 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869nk\" (UniqueName: \"kubernetes.io/projected/716a1bb5-66f3-471e-96e1-353506de67a4-kube-api-access-869nk\") pod \"dnsmasq-dns-7bf6f4788c-zvlv7\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:35 crc kubenswrapper[4684]: I0123 09:36:35.930465 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:36 crc kubenswrapper[4684]: I0123 09:36:36.419981 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bf6f4788c-zvlv7"] Jan 23 09:36:36 crc kubenswrapper[4684]: W0123 09:36:36.420569 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod716a1bb5_66f3_471e_96e1_353506de67a4.slice/crio-b75597392f55913ca4e8d0d259d87c61f0fd5f9cf801f8ce92bf28a6d4beeefc WatchSource:0}: Error finding container b75597392f55913ca4e8d0d259d87c61f0fd5f9cf801f8ce92bf28a6d4beeefc: Status 404 returned error can't find the container with id b75597392f55913ca4e8d0d259d87c61f0fd5f9cf801f8ce92bf28a6d4beeefc Jan 23 09:36:37 crc kubenswrapper[4684]: I0123 09:36:37.383229 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" event={"ID":"716a1bb5-66f3-471e-96e1-353506de67a4","Type":"ContainerDied","Data":"9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5"} Jan 23 09:36:37 crc kubenswrapper[4684]: I0123 09:36:37.383581 4684 generic.go:334] "Generic (PLEG): container finished" podID="716a1bb5-66f3-471e-96e1-353506de67a4" containerID="9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5" exitCode=0 Jan 23 09:36:37 crc kubenswrapper[4684]: I0123 09:36:37.383608 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" event={"ID":"716a1bb5-66f3-471e-96e1-353506de67a4","Type":"ContainerStarted","Data":"b75597392f55913ca4e8d0d259d87c61f0fd5f9cf801f8ce92bf28a6d4beeefc"} Jan 23 09:36:37 crc kubenswrapper[4684]: I0123 09:36:37.587791 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:36:37 crc kubenswrapper[4684]: E0123 09:36:37.588277 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:36:38 crc kubenswrapper[4684]: I0123 09:36:38.392650 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" event={"ID":"716a1bb5-66f3-471e-96e1-353506de67a4","Type":"ContainerStarted","Data":"2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee"} Jan 23 09:36:38 crc kubenswrapper[4684]: I0123 09:36:38.393060 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:38 crc kubenswrapper[4684]: I0123 09:36:38.420224 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" podStartSLOduration=3.420204296 
podStartE2EDuration="3.420204296s" podCreationTimestamp="2026-01-23 09:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:36:38.414047803 +0000 UTC m=+1771.037426364" watchObservedRunningTime="2026-01-23 09:36:38.420204296 +0000 UTC m=+1771.043582837" Jan 23 09:36:45 crc kubenswrapper[4684]: I0123 09:36:45.931906 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.003818 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f69c5c76f-8qdgs"] Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.004137 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerName="dnsmasq-dns" containerID="cri-o://57629ad7ae1d089781543c6965d9186b66d08a222dce7279b30a4d2098dd5f7e" gracePeriod=10 Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.235639 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8446b8749f-5zcjt"] Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.237611 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.252112 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446b8749f-5zcjt"] Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.405594 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-dns-svc\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.405934 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-openstack-edpm-ipam\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.406103 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhz7\" (UniqueName: \"kubernetes.io/projected/254228f8-63f6-4461-83cb-fac99d91726e-kube-api-access-prhz7\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.406377 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-sb\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.406550 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-nb\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: 
\"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.406604 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-config\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.456608 4684 generic.go:334] "Generic (PLEG): container finished" podID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerID="57629ad7ae1d089781543c6965d9186b66d08a222dce7279b30a4d2098dd5f7e" exitCode=0 Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.456656 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" event={"ID":"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb","Type":"ContainerDied","Data":"57629ad7ae1d089781543c6965d9186b66d08a222dce7279b30a4d2098dd5f7e"} Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.507760 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-sb\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.507863 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-nb\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.507893 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-config\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.507943 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-dns-svc\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.507969 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-openstack-edpm-ipam\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.508013 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhz7\" (UniqueName: \"kubernetes.io/projected/254228f8-63f6-4461-83cb-fac99d91726e-kube-api-access-prhz7\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.508884 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-sb\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.509111 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-nb\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.509589 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-dns-svc\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.509798 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-openstack-edpm-ipam\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.510313 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-config\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.532280 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhz7\" (UniqueName: \"kubernetes.io/projected/254228f8-63f6-4461-83cb-fac99d91726e-kube-api-access-prhz7\") pod \"dnsmasq-dns-8446b8749f-5zcjt\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:46 crc kubenswrapper[4684]: I0123 09:36:46.555904 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.036826 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446b8749f-5zcjt"] Jan 23 09:36:47 crc kubenswrapper[4684]: W0123 09:36:47.052822 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod254228f8_63f6_4461_83cb_fac99d91726e.slice/crio-cb3f3f0382caa47b4265862e72e1c2e91f232ea1c3d6bfb0294c38bc9e519f2c WatchSource:0}: Error finding container cb3f3f0382caa47b4265862e72e1c2e91f232ea1c3d6bfb0294c38bc9e519f2c: Status 404 returned error can't find the container with id cb3f3f0382caa47b4265862e72e1c2e91f232ea1c3d6bfb0294c38bc9e519f2c Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.098116 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.222564 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-config\") pod \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.223022 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-dns-svc\") pod \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.223864 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-nb\") pod \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.223904 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6f78\" (UniqueName: \"kubernetes.io/projected/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-kube-api-access-h6f78\") pod \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.224244 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-sb\") pod \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\" (UID: \"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb\") " Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.243981 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-kube-api-access-h6f78" (OuterVolumeSpecName: "kube-api-access-h6f78") pod "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" (UID: "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb"). InnerVolumeSpecName "kube-api-access-h6f78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.289787 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" (UID: "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.303677 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" (UID: "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.304670 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" (UID: "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.316463 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-config" (OuterVolumeSpecName: "config") pod "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" (UID: "5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.327323 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.327360 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.327374 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.327385 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.327397 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6f78\" (UniqueName: \"kubernetes.io/projected/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb-kube-api-access-h6f78\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.464867 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" event={"ID":"254228f8-63f6-4461-83cb-fac99d91726e","Type":"ContainerStarted","Data":"cb3f3f0382caa47b4265862e72e1c2e91f232ea1c3d6bfb0294c38bc9e519f2c"} Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.467089 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" event={"ID":"5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb","Type":"ContainerDied","Data":"9df5ef70e1883ae23b65c7857b1416e74b1856214aac349d319b540c009a1841"} Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.467126 4684 scope.go:117] "RemoveContainer" containerID="57629ad7ae1d089781543c6965d9186b66d08a222dce7279b30a4d2098dd5f7e" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.467182 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f69c5c76f-8qdgs" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.486953 4684 scope.go:117] "RemoveContainer" containerID="cc64fedec05b87847f9e240fe2a006be46e0858764bfa7da2f7c6565a14e554b" Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.503639 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f69c5c76f-8qdgs"] Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.512038 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f69c5c76f-8qdgs"] Jan 23 09:36:47 crc kubenswrapper[4684]: I0123 09:36:47.595982 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" path="/var/lib/kubelet/pods/5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb/volumes" Jan 23 09:36:48 crc kubenswrapper[4684]: I0123 09:36:48.476286 4684 generic.go:334] "Generic (PLEG): container finished" podID="254228f8-63f6-4461-83cb-fac99d91726e" containerID="d3a8618764b797dba6477bb0a4c98c1197497fa2f09f3da440c9b1bec75e3909" exitCode=0 Jan 23 09:36:48 crc kubenswrapper[4684]: I0123 09:36:48.476377 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" event={"ID":"254228f8-63f6-4461-83cb-fac99d91726e","Type":"ContainerDied","Data":"d3a8618764b797dba6477bb0a4c98c1197497fa2f09f3da440c9b1bec75e3909"} Jan 23 09:36:49 crc kubenswrapper[4684]: I0123 09:36:49.490935 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" event={"ID":"254228f8-63f6-4461-83cb-fac99d91726e","Type":"ContainerStarted","Data":"0e441f99d8d7f62586c27a14f460269bb3ce4a9215d979566c6a8f24cf2f9242"} Jan 23 09:36:49 crc kubenswrapper[4684]: I0123 09:36:49.491370 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:49 crc kubenswrapper[4684]: I0123 09:36:49.516302 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" podStartSLOduration=3.516279436 podStartE2EDuration="3.516279436s" podCreationTimestamp="2026-01-23 09:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:36:49.506930144 +0000 UTC m=+1782.130308685" watchObservedRunningTime="2026-01-23 09:36:49.516279436 +0000 UTC m=+1782.139657977" Jan 23 09:36:53 crc kubenswrapper[4684]: I0123 09:36:52.581544 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:36:53 crc kubenswrapper[4684]: E0123 09:36:52.582047 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:36:56 crc kubenswrapper[4684]: I0123 09:36:56.558942 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 09:36:56 crc kubenswrapper[4684]: I0123 09:36:56.637972 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bf6f4788c-zvlv7"] Jan 23 09:36:56 crc kubenswrapper[4684]: I0123 09:36:56.638287 4684 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" containerName="dnsmasq-dns" containerID="cri-o://2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee" gracePeriod=10 Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.494189 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.570812 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-nb\") pod \"716a1bb5-66f3-471e-96e1-353506de67a4\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.570905 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-869nk\" (UniqueName: \"kubernetes.io/projected/716a1bb5-66f3-471e-96e1-353506de67a4-kube-api-access-869nk\") pod \"716a1bb5-66f3-471e-96e1-353506de67a4\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.570983 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-openstack-edpm-ipam\") pod \"716a1bb5-66f3-471e-96e1-353506de67a4\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.571236 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-dns-svc\") pod \"716a1bb5-66f3-471e-96e1-353506de67a4\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.571346 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-config\") pod \"716a1bb5-66f3-471e-96e1-353506de67a4\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.571408 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-sb\") pod \"716a1bb5-66f3-471e-96e1-353506de67a4\" (UID: \"716a1bb5-66f3-471e-96e1-353506de67a4\") " Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.605093 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716a1bb5-66f3-471e-96e1-353506de67a4-kube-api-access-869nk" (OuterVolumeSpecName: "kube-api-access-869nk") pod "716a1bb5-66f3-471e-96e1-353506de67a4" (UID: "716a1bb5-66f3-471e-96e1-353506de67a4"). InnerVolumeSpecName "kube-api-access-869nk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.619059 4684 generic.go:334] "Generic (PLEG): container finished" podID="716a1bb5-66f3-471e-96e1-353506de67a4" containerID="2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee" exitCode=0 Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.619105 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" event={"ID":"716a1bb5-66f3-471e-96e1-353506de67a4","Type":"ContainerDied","Data":"2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee"} Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.619131 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" event={"ID":"716a1bb5-66f3-471e-96e1-353506de67a4","Type":"ContainerDied","Data":"b75597392f55913ca4e8d0d259d87c61f0fd5f9cf801f8ce92bf28a6d4beeefc"} Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.619147 4684 scope.go:117] "RemoveContainer" containerID="2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.619271 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf6f4788c-zvlv7" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.642594 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "716a1bb5-66f3-471e-96e1-353506de67a4" (UID: "716a1bb5-66f3-471e-96e1-353506de67a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.660982 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "716a1bb5-66f3-471e-96e1-353506de67a4" (UID: "716a1bb5-66f3-471e-96e1-353506de67a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.681228 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "716a1bb5-66f3-471e-96e1-353506de67a4" (UID: "716a1bb5-66f3-471e-96e1-353506de67a4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.683537 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-869nk\" (UniqueName: \"kubernetes.io/projected/716a1bb5-66f3-471e-96e1-353506de67a4-kube-api-access-869nk\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.683566 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.683604 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.683617 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.685586 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-config" (OuterVolumeSpecName: "config") pod "716a1bb5-66f3-471e-96e1-353506de67a4" (UID: "716a1bb5-66f3-471e-96e1-353506de67a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.699257 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "716a1bb5-66f3-471e-96e1-353506de67a4" (UID: "716a1bb5-66f3-471e-96e1-353506de67a4"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.744718 4684 scope.go:117] "RemoveContainer" containerID="9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.768834 4684 scope.go:117] "RemoveContainer" containerID="2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee" Jan 23 09:36:58 crc kubenswrapper[4684]: E0123 09:36:58.769341 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee\": container with ID starting with 2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee not found: ID does not exist" containerID="2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.769406 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee"} err="failed to get container status \"2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee\": rpc error: code = NotFound desc = could not find container \"2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee\": container with ID starting with 2a04ab7dbc778f85cafefed770b9031584e03937847835703df31fa69a9162ee not found: ID does not exist" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.769431 4684 scope.go:117] "RemoveContainer" containerID="9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5" Jan 23 09:36:58 crc kubenswrapper[4684]: E0123 09:36:58.769778 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5\": container with ID starting with 9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5 not found: ID does not exist" containerID="9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.769829 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5"} err="failed to get container status \"9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5\": rpc error: code = NotFound desc = could not find container \"9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5\": container with ID starting with 9eac1d1216f7eb5129f963a9f98119cb33551f245ea555fdb3b986d3c7b077b5 not found: ID does not exist" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.785357 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.785660 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/716a1bb5-66f3-471e-96e1-353506de67a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.957707 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bf6f4788c-zvlv7"] Jan 23 09:36:58 crc kubenswrapper[4684]: I0123 09:36:58.967097 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-7bf6f4788c-zvlv7"] Jan 23 09:36:59 crc kubenswrapper[4684]: I0123 09:36:59.594506 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" path="/var/lib/kubelet/pods/716a1bb5-66f3-471e-96e1-353506de67a4/volumes" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.289786 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24"] Jan 23 09:37:02 crc kubenswrapper[4684]: E0123 09:37:02.290822 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" containerName="init" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.290845 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" containerName="init" Jan 23 09:37:02 crc kubenswrapper[4684]: E0123 09:37:02.290869 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerName="init" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.290876 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerName="init" Jan 23 09:37:02 crc kubenswrapper[4684]: E0123 09:37:02.290884 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerName="dnsmasq-dns" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.290891 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerName="dnsmasq-dns" Jan 23 09:37:02 crc kubenswrapper[4684]: E0123 09:37:02.290901 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" containerName="dnsmasq-dns" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.290908 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" containerName="dnsmasq-dns" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.291120 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9a95a1-59e2-4ea2-96f4-d95ef3bdcebb" containerName="dnsmasq-dns" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.291136 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="716a1bb5-66f3-471e-96e1-353506de67a4" containerName="dnsmasq-dns" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.291887 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.294348 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.294401 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.294358 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.294661 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.307584 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24"] Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.454675 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-kube-api-access-dd66n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.454773 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.454824 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.454849 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.557011 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-kube-api-access-dd66n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.557500 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.557654 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.557807 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.565575 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.567414 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.568508 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.580425 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-kube-api-access-dd66n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dww24\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:02 crc kubenswrapper[4684]: I0123 09:37:02.610733 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:03 crc kubenswrapper[4684]: I0123 09:37:03.229409 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24"] Jan 23 09:37:03 crc kubenswrapper[4684]: I0123 09:37:03.240033 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:37:03 crc kubenswrapper[4684]: I0123 09:37:03.581965 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:37:03 crc kubenswrapper[4684]: E0123 09:37:03.582332 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:37:03 crc kubenswrapper[4684]: I0123 09:37:03.675097 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" event={"ID":"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff","Type":"ContainerStarted","Data":"1de9d2e46f1d698fe8164e493c22e7e6f4d5ec85b12e99d3c24e30efcb610374"} Jan 23 09:37:07 crc kubenswrapper[4684]: I0123 09:37:07.709743 4684 generic.go:334] "Generic (PLEG): container finished" podID="d05a61f9-7d60-4073-ae62-7a4a59fe6ed6" containerID="e59b637437b36e9b6b974a9c3f463a2bc2834101358bb253fd84a0c7745e10c7" exitCode=0 Jan 23 09:37:07 crc kubenswrapper[4684]: I0123 09:37:07.710280 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6","Type":"ContainerDied","Data":"e59b637437b36e9b6b974a9c3f463a2bc2834101358bb253fd84a0c7745e10c7"} Jan 23 09:37:07 crc kubenswrapper[4684]: I0123 09:37:07.715168 4684 generic.go:334] "Generic (PLEG): container finished" podID="5b7f0e5b-e1ba-4da5-b644-e16236fd5403" containerID="90e35e316eb55a861f0dd2afb7814645afe4b1a0025e07f6fc78d9e4ed00572f" exitCode=0 Jan 23 09:37:07 crc kubenswrapper[4684]: I0123 09:37:07.715212 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5b7f0e5b-e1ba-4da5-b644-e16236fd5403","Type":"ContainerDied","Data":"90e35e316eb55a861f0dd2afb7814645afe4b1a0025e07f6fc78d9e4ed00572f"} Jan 23 09:37:12 crc kubenswrapper[4684]: I0123 09:37:12.480477 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.813915 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" event={"ID":"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff","Type":"ContainerStarted","Data":"108adf2b8cc722deaa239edf7ea5dcd32da0c275706a434a7d9039a8b6ec9d50"} Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.821456 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d05a61f9-7d60-4073-ae62-7a4a59fe6ed6","Type":"ContainerStarted","Data":"2677d41a4aec2aca6403c18a156db3040b53fa574e7adef5678b2d5d92607cdf"} Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.822155 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.824346 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5b7f0e5b-e1ba-4da5-b644-e16236fd5403","Type":"ContainerStarted","Data":"af9e90df31ad95f776afaebbc9d2e4508fe756dbd5f3558a4ee1fc38da50670b"} Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.825030 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.888719 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" podStartSLOduration=2.652159142 podStartE2EDuration="11.888681939s" podCreationTimestamp="2026-01-23 09:37:02 +0000 UTC" firstStartedPulling="2026-01-23 09:37:03.239484332 +0000 UTC m=+1795.862862873" lastFinishedPulling="2026-01-23 09:37:12.476007129 +0000 UTC m=+1805.099385670" observedRunningTime="2026-01-23 09:37:13.844879279 +0000 UTC m=+1806.468257840" watchObservedRunningTime="2026-01-23 09:37:13.888681939 +0000 UTC m=+1806.512060480" Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.889748 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.889727128 podStartE2EDuration="41.889727128s" podCreationTimestamp="2026-01-23 09:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:37:13.885481499 +0000 UTC m=+1806.508860060" watchObservedRunningTime="2026-01-23 09:37:13.889727128 +0000 UTC m=+1806.513105679" Jan 23 09:37:13 crc kubenswrapper[4684]: I0123 09:37:13.914336 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.914312518 podStartE2EDuration="41.914312518s" podCreationTimestamp="2026-01-23 09:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:37:13.908195577 +0000 UTC m=+1806.531574128" watchObservedRunningTime="2026-01-23 09:37:13.914312518 +0000 UTC m=+1806.537691059" Jan 23 09:37:17 crc kubenswrapper[4684]: I0123 09:37:17.582251 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:37:17 crc kubenswrapper[4684]: E0123 09:37:17.583277 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:37:23 crc kubenswrapper[4684]: I0123 09:37:23.289471 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5b7f0e5b-e1ba-4da5-b644-e16236fd5403" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.196:5671: connect: connection refused" Jan 23 09:37:23 crc kubenswrapper[4684]: I0123 09:37:23.350209 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d05a61f9-7d60-4073-ae62-7a4a59fe6ed6" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.0.197:5671: connect: connection refused" Jan 23 09:37:28 crc kubenswrapper[4684]: I0123 09:37:28.581854 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:37:28 crc kubenswrapper[4684]: E0123 09:37:28.582638 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:37:29 crc kubenswrapper[4684]: I0123 09:37:29.985497 4684 generic.go:334] "Generic (PLEG): container finished" podID="b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" containerID="108adf2b8cc722deaa239edf7ea5dcd32da0c275706a434a7d9039a8b6ec9d50" exitCode=0 Jan 23 09:37:29 crc kubenswrapper[4684]: I0123 09:37:29.985748 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" event={"ID":"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff","Type":"ContainerDied","Data":"108adf2b8cc722deaa239edf7ea5dcd32da0c275706a434a7d9039a8b6ec9d50"} Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.407737 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.551294 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-repo-setup-combined-ca-bundle\") pod \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.551751 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-inventory\") pod \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.551789 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-ssh-key-openstack-edpm-ipam\") pod \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.551827 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-kube-api-access-dd66n\") pod \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\" (UID: \"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff\") " Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.560593 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-kube-api-access-dd66n" (OuterVolumeSpecName: "kube-api-access-dd66n") pod "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" (UID: "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff"). InnerVolumeSpecName "kube-api-access-dd66n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.568947 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" (UID: "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.610272 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" (UID: "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.618562 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-inventory" (OuterVolumeSpecName: "inventory") pod "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" (UID: "b4b6123c-b0a7-4d22-a9b7-5da8a1598fff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.655361 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.655391 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.655403 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-kube-api-access-dd66n\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4684]: I0123 09:37:31.655415 4684 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.021518 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" event={"ID":"b4b6123c-b0a7-4d22-a9b7-5da8a1598fff","Type":"ContainerDied","Data":"1de9d2e46f1d698fe8164e493c22e7e6f4d5ec85b12e99d3c24e30efcb610374"} Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.021563 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.021565 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de9d2e46f1d698fe8164e493c22e7e6f4d5ec85b12e99d3c24e30efcb610374" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.156844 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz"] Jan 23 09:37:32 crc kubenswrapper[4684]: E0123 09:37:32.157228 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.157246 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.157414 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.157984 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.167760 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.167815 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.167886 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.172222 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.189276 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz"] Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.267351 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.267795 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.267830 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbr4g\" (UniqueName: \"kubernetes.io/projected/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-kube-api-access-hbr4g\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.267948 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.369548 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.369623 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.369648 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbr4g\" (UniqueName: \"kubernetes.io/projected/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-kube-api-access-hbr4g\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.369731 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.376320 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.378570 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.379109 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.394357 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbr4g\" (UniqueName: \"kubernetes.io/projected/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-kube-api-access-hbr4g\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:32 crc kubenswrapper[4684]: I0123 09:37:32.475775 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:37:33 crc kubenswrapper[4684]: I0123 09:37:33.055572 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz"] Jan 23 09:37:33 crc kubenswrapper[4684]: W0123 09:37:33.072080 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod990467eb_1e9f_4b1f_bf85_dd9980a0b5aa.slice/crio-1b093aa477791cc5024b4d993019ccf89a09bf960c1b8139e48d64b04f30fce4 WatchSource:0}: Error finding container 1b093aa477791cc5024b4d993019ccf89a09bf960c1b8139e48d64b04f30fce4: Status 404 returned error can't find the container with id 1b093aa477791cc5024b4d993019ccf89a09bf960c1b8139e48d64b04f30fce4 Jan 23 09:37:33 crc kubenswrapper[4684]: I0123 09:37:33.289259 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:37:33 crc kubenswrapper[4684]: I0123 09:37:33.350946 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 09:37:34 crc kubenswrapper[4684]: I0123 09:37:34.043226 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" event={"ID":"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa","Type":"ContainerStarted","Data":"1b093aa477791cc5024b4d993019ccf89a09bf960c1b8139e48d64b04f30fce4"} Jan 23 09:37:36 crc kubenswrapper[4684]: I0123 09:37:36.063647 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" event={"ID":"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa","Type":"ContainerStarted","Data":"24accc2341840839f402cbf55f4918a0f0dd46f71345bca8d86328b525f446eb"} Jan 23 09:37:36 crc kubenswrapper[4684]: I0123 09:37:36.085636 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" podStartSLOduration=2.211333084 podStartE2EDuration="4.085602871s" podCreationTimestamp="2026-01-23 09:37:32 +0000 UTC" firstStartedPulling="2026-01-23 09:37:33.080231858 +0000 UTC m=+1825.703610399" lastFinishedPulling="2026-01-23 09:37:34.954501645 +0000 UTC m=+1827.577880186" observedRunningTime="2026-01-23 09:37:36.081080604 +0000 UTC m=+1828.704459165" watchObservedRunningTime="2026-01-23 09:37:36.085602871 +0000 UTC m=+1828.708981412" Jan 23 09:37:40 crc kubenswrapper[4684]: I0123 09:37:40.582690 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:37:40 crc kubenswrapper[4684]: E0123 
09:37:40.584018 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:37:52 crc kubenswrapper[4684]: I0123 09:37:52.583370 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:37:52 crc kubenswrapper[4684]: E0123 09:37:52.584652 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:38:05 crc kubenswrapper[4684]: I0123 09:38:05.582164 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:38:05 crc kubenswrapper[4684]: E0123 09:38:05.582917 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:38:12 crc kubenswrapper[4684]: I0123 09:38:12.527839 4684 scope.go:117] "RemoveContainer" containerID="b34cc2bb7b14772f09b40ed69d363f104d610f45095bbc28810f6418559a9a0c" Jan 23 09:38:12 crc kubenswrapper[4684]: I0123 09:38:12.552925 4684 scope.go:117] "RemoveContainer" containerID="92a47e3603f036dc7013bf3c2d6cda2a28f90e0cb41dd1c34addedd1aafc6d4c" Jan 23 09:38:12 crc kubenswrapper[4684]: I0123 09:38:12.577086 4684 scope.go:117] "RemoveContainer" containerID="33604b9c52c32c433debccb925270fa5ab782a873510def0353d22c565141900" Jan 23 09:38:17 crc kubenswrapper[4684]: I0123 09:38:17.588609 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:38:17 crc kubenswrapper[4684]: E0123 09:38:17.589460 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:38:30 crc kubenswrapper[4684]: I0123 09:38:30.581828 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:38:30 crc kubenswrapper[4684]: E0123 09:38:30.583621 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:38:42 crc kubenswrapper[4684]: I0123 09:38:42.582049 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:38:42 crc kubenswrapper[4684]: E0123 09:38:42.582808 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:38:53 crc kubenswrapper[4684]: I0123 09:38:53.582605 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:38:53 crc kubenswrapper[4684]: E0123 09:38:53.583739 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:39:04 crc kubenswrapper[4684]: I0123 09:39:04.581984 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:39:04 crc kubenswrapper[4684]: E0123 09:39:04.582796 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:39:15 crc kubenswrapper[4684]: I0123 09:39:15.582142 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:39:15 crc kubenswrapper[4684]: E0123 09:39:15.582960 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:39:30 crc kubenswrapper[4684]: I0123 09:39:30.582338 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:39:30 crc kubenswrapper[4684]: E0123 09:39:30.584039 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" 
podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:39:42 crc kubenswrapper[4684]: I0123 09:39:42.582215 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:39:42 crc kubenswrapper[4684]: E0123 09:39:42.582938 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.050570 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hgjbd"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.060414 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2c77-account-create-update-k55xq"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.070507 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a3c7-account-create-update-mqslp"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.082765 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qbpk9"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.104144 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-q6drh"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.117989 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hgjbd"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.126321 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1ed6-account-create-update-prsmk"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.133805 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2c77-account-create-update-k55xq"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.141590 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a3c7-account-create-update-mqslp"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.149556 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-q6drh"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.156637 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1ed6-account-create-update-prsmk"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.166967 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qbpk9"] Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.594391 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9" path="/var/lib/kubelet/pods/0b25b466-e0e3-4ec1-9ce6-c3f4a19a2ae9/volumes" Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.595144 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd72eca-ef1e-4445-9d19-65ff92842e15" path="/var/lib/kubelet/pods/2bd72eca-ef1e-4445-9d19-65ff92842e15/volumes" Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.595960 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41137896-cb01-4aa7-a4c0-786f7db16906" path="/var/lib/kubelet/pods/41137896-cb01-4aa7-a4c0-786f7db16906/volumes" Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.596531 4684 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4887e48e-971e-4f3a-8a5f-a050961c9c7c" path="/var/lib/kubelet/pods/4887e48e-971e-4f3a-8a5f-a050961c9c7c/volumes" Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.597636 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93356bbd-8831-4fad-a7a7-4494b4244c26" path="/var/lib/kubelet/pods/93356bbd-8831-4fad-a7a7-4494b4244c26/volumes" Jan 23 09:39:45 crc kubenswrapper[4684]: I0123 09:39:45.598194 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbe33db-2ad3-4693-b957-716547fb796f" path="/var/lib/kubelet/pods/9fbe33db-2ad3-4693-b957-716547fb796f/volumes" Jan 23 09:39:54 crc kubenswrapper[4684]: I0123 09:39:54.582800 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:39:54 crc kubenswrapper[4684]: E0123 09:39:54.583449 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:39:57 crc kubenswrapper[4684]: I0123 09:39:57.030266 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2gbdd"] Jan 23 09:39:57 crc kubenswrapper[4684]: I0123 09:39:57.037524 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2gbdd"] Jan 23 09:39:57 crc kubenswrapper[4684]: I0123 09:39:57.592737 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="471d49c9-9531-4b3c-bbe5-1fb98852d71d" path="/var/lib/kubelet/pods/471d49c9-9531-4b3c-bbe5-1fb98852d71d/volumes" Jan 23 09:40:05 crc kubenswrapper[4684]: I0123 09:40:05.669001 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:40:05 crc kubenswrapper[4684]: E0123 09:40:05.670027 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.715647 4684 scope.go:117] "RemoveContainer" containerID="c19eb481efea50f03afa185cead2d9cd36a7b905c2c818689fb4b12dad3886cc" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.751784 4684 scope.go:117] "RemoveContainer" containerID="77af8ffcaeca9435e6e4535486b24ad2c2cc8b264bbe31057a6f33747e15ecaa" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.811925 4684 scope.go:117] "RemoveContainer" containerID="aa2e0795d891f05c3c3740731d371598dd147a2fa84e2cbb486a96d5e7067258" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.866173 4684 scope.go:117] "RemoveContainer" containerID="8619dfef25170ec3f006774f2f2ce9651e9b6ddea933ef1ad8127965eb9a0d7a" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.890746 4684 scope.go:117] "RemoveContainer" containerID="e51f1a61abede2389701f093ae03178e6c0c47a717998eda985286e9850df226" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 
09:40:12.917786 4684 scope.go:117] "RemoveContainer" containerID="6d2d1b0aa404c80f7cdaa2866f4b096fea8b17458044436ce8287492fc01664c" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.953423 4684 scope.go:117] "RemoveContainer" containerID="5145cfc99a3d8b0e61f91a088872062cb3601e812ac9f9eb16ac734dda1fb422" Jan 23 09:40:12 crc kubenswrapper[4684]: I0123 09:40:12.999566 4684 scope.go:117] "RemoveContainer" containerID="5a19d5e5234809e1f60691730398cea68dddfa18bfbd96febaa55b2782b5283b" Jan 23 09:40:13 crc kubenswrapper[4684]: I0123 09:40:13.023060 4684 scope.go:117] "RemoveContainer" containerID="414c8b50110f01840dfdf1a3f2857d4942a51b9c24ff5691d462ca8a72909d34" Jan 23 09:40:13 crc kubenswrapper[4684]: I0123 09:40:13.048568 4684 scope.go:117] "RemoveContainer" containerID="15ae7d3caa1770ef913e1d10f705291740aa46d16e9cc0610b9e6ceaff5be7ab" Jan 23 09:40:13 crc kubenswrapper[4684]: I0123 09:40:13.104319 4684 scope.go:117] "RemoveContainer" containerID="9dd7dff46f7efcc0738ef0a948eb3c1d2001b98a8bc1cbced6f6f45a4c4f5832" Jan 23 09:40:20 crc kubenswrapper[4684]: I0123 09:40:20.581748 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:40:20 crc kubenswrapper[4684]: E0123 09:40:20.582495 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:40:35 crc kubenswrapper[4684]: I0123 09:40:35.582763 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:40:35 crc kubenswrapper[4684]: E0123 09:40:35.583629 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:40:46 crc kubenswrapper[4684]: I0123 09:40:46.582882 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:40:47 crc kubenswrapper[4684]: I0123 09:40:47.751774 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3"} Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.081024 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2b42-account-create-update-w5njq"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.101836 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wdrbh"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.122754 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c3f2-account-create-update-z6d9n"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.133680 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-c3f2-account-create-update-z6d9n"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.142295 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jcvvb"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.150859 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-aa08-account-create-update-c8jx4"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.159840 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mzsx6"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.169250 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2b42-account-create-update-w5njq"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.180402 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wdrbh"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.192775 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-aa08-account-create-update-c8jx4"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.202334 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mzsx6"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.219733 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jcvvb"] Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.593418 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248994f7-7b0f-41e4-8a32-2dbf42ea41e9" path="/var/lib/kubelet/pods/248994f7-7b0f-41e4-8a32-2dbf42ea41e9/volumes" Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.594115 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51a95467-7819-43c2-aa22-699c74df62e8" path="/var/lib/kubelet/pods/51a95467-7819-43c2-aa22-699c74df62e8/volumes" Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.594762 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d712541-e87b-49c3-8cde-2daf0ef2c0bd" path="/var/lib/kubelet/pods/5d712541-e87b-49c3-8cde-2daf0ef2c0bd/volumes" Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.595388 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf" path="/var/lib/kubelet/pods/7ae59c0f-8e7f-4f59-a991-3e6afb7e0daf/volumes" Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.596606 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c57d15-8e5f-4245-8830-c84079c9bee5" path="/var/lib/kubelet/pods/c6c57d15-8e5f-4245-8830-c84079c9bee5/volumes" Jan 23 09:40:49 crc kubenswrapper[4684]: I0123 09:40:49.597338 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2e86a9-9a16-4fd0-b065-d95744b90dd7" path="/var/lib/kubelet/pods/ed2e86a9-9a16-4fd0-b065-d95744b90dd7/volumes" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.270335 4684 scope.go:117] "RemoveContainer" containerID="e84ce78b285228f4c96b13ca8a93ea665cd36a0c83bd0d88d7b8a23db0de2723" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.303636 4684 scope.go:117] "RemoveContainer" containerID="389d9652e99fbc6ce5c504c5926e0266187b9792c0c555bda41c84461d0c5326" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.348764 4684 scope.go:117] "RemoveContainer" containerID="a9af033c9e48d7e18f71eeca3fd50c1b00fea299f546cc77eef4950db3505265" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.393363 4684 scope.go:117] "RemoveContainer" 
containerID="bf19e3847088e82aaa7e07e340cc00931742cc0eca1a0d3ab89b2897a7b88ffb" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.463166 4684 scope.go:117] "RemoveContainer" containerID="f738cd1b919cda818554031b3b6e23bccada191eb207eee446169ca4295f4ce9" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.492088 4684 scope.go:117] "RemoveContainer" containerID="830ce8d856a7e21570caa0e1946e2834d0d061681d555f9601a1674f4b8129cf" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.531345 4684 scope.go:117] "RemoveContainer" containerID="9715380cd5eb1122dc2e5c7e4b385a45094637abbe084dbbd0431ea0ae1902cf" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.552226 4684 scope.go:117] "RemoveContainer" containerID="b7e9813925b9ba54eb397f2c870d9d986650388af36fc8bd3677e423b1e1c9ad" Jan 23 09:41:13 crc kubenswrapper[4684]: I0123 09:41:13.575125 4684 scope.go:117] "RemoveContainer" containerID="c5b6437ad8b79cea69578ac45d820eac798e9bc6ab79bd8a5829a23967bccde0" Jan 23 09:41:18 crc kubenswrapper[4684]: I0123 09:41:18.052526 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-znt8j"] Jan 23 09:41:18 crc kubenswrapper[4684]: I0123 09:41:18.071625 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-znt8j"] Jan 23 09:41:19 crc kubenswrapper[4684]: I0123 09:41:19.594845 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4ce139-6147-4c82-8b4d-74de8f779b6c" path="/var/lib/kubelet/pods/7b4ce139-6147-4c82-8b4d-74de8f779b6c/volumes" Jan 23 09:41:34 crc kubenswrapper[4684]: I0123 09:41:34.033365 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tn627"] Jan 23 09:41:34 crc kubenswrapper[4684]: I0123 09:41:34.042363 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tn627"] Jan 23 09:41:35 crc kubenswrapper[4684]: I0123 09:41:35.594584 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed6b304-077d-4d13-a28b-2c41c046a303" path="/var/lib/kubelet/pods/0ed6b304-077d-4d13-a28b-2c41c046a303/volumes" Jan 23 09:41:41 crc kubenswrapper[4684]: I0123 09:41:41.262546 4684 generic.go:334] "Generic (PLEG): container finished" podID="990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" containerID="24accc2341840839f402cbf55f4918a0f0dd46f71345bca8d86328b525f446eb" exitCode=0 Jan 23 09:41:41 crc kubenswrapper[4684]: I0123 09:41:41.262616 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" event={"ID":"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa","Type":"ContainerDied","Data":"24accc2341840839f402cbf55f4918a0f0dd46f71345bca8d86328b525f446eb"} Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.689793 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.850954 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-inventory\") pod \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.851129 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbr4g\" (UniqueName: \"kubernetes.io/projected/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-kube-api-access-hbr4g\") pod \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.851260 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-bootstrap-combined-ca-bundle\") pod \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.851307 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-ssh-key-openstack-edpm-ipam\") pod \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\" (UID: \"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa\") " Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.857288 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-kube-api-access-hbr4g" (OuterVolumeSpecName: "kube-api-access-hbr4g") pod "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" (UID: "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa"). InnerVolumeSpecName "kube-api-access-hbr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.862930 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" (UID: "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.879805 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-inventory" (OuterVolumeSpecName: "inventory") pod "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" (UID: "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.884203 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" (UID: "990467eb-1e9f-4b1f-bf85-dd9980a0b5aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.953927 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.953972 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbr4g\" (UniqueName: \"kubernetes.io/projected/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-kube-api-access-hbr4g\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.953986 4684 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4684]: I0123 09:41:42.953997 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.280179 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" event={"ID":"990467eb-1e9f-4b1f-bf85-dd9980a0b5aa","Type":"ContainerDied","Data":"1b093aa477791cc5024b4d993019ccf89a09bf960c1b8139e48d64b04f30fce4"} Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.280893 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b093aa477791cc5024b4d993019ccf89a09bf960c1b8139e48d64b04f30fce4" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.280230 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.385273 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc"] Jan 23 09:41:43 crc kubenswrapper[4684]: E0123 09:41:43.385758 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.385780 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.386023 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.386791 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.389039 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.389371 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.389754 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.389914 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.409646 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc"] Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.462347 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.462460 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwvs\" (UniqueName: \"kubernetes.io/projected/215ee287-6881-4991-ae11-e63eb7605a0a-kube-api-access-4kwvs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.462582 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.564491 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.564612 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kwvs\" (UniqueName: \"kubernetes.io/projected/215ee287-6881-4991-ae11-e63eb7605a0a-kube-api-access-4kwvs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.564677 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.572320 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.573294 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.588929 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kwvs\" (UniqueName: \"kubernetes.io/projected/215ee287-6881-4991-ae11-e63eb7605a0a-kube-api-access-4kwvs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:43 crc kubenswrapper[4684]: I0123 09:41:43.705145 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:41:44 crc kubenswrapper[4684]: I0123 09:41:44.908681 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc"] Jan 23 09:41:45 crc kubenswrapper[4684]: I0123 09:41:45.298446 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" event={"ID":"215ee287-6881-4991-ae11-e63eb7605a0a","Type":"ContainerStarted","Data":"760964874c48c1fa2c99583dc07403f773239eb9ea80f6ec2499babf85793b06"} Jan 23 09:41:46 crc kubenswrapper[4684]: I0123 09:41:46.308342 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" event={"ID":"215ee287-6881-4991-ae11-e63eb7605a0a","Type":"ContainerStarted","Data":"638162a7e6f0fcefc5aba9522b91cfe96c49bd47eb9e3c02ce5d5c0e9953735d"} Jan 23 09:41:46 crc kubenswrapper[4684]: I0123 09:41:46.327148 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" podStartSLOduration=2.8622315499999997 podStartE2EDuration="3.327125045s" podCreationTimestamp="2026-01-23 09:41:43 +0000 UTC" firstStartedPulling="2026-01-23 09:41:44.896793096 +0000 UTC m=+2077.520171637" lastFinishedPulling="2026-01-23 09:41:45.361686591 +0000 UTC m=+2077.985065132" observedRunningTime="2026-01-23 09:41:46.325135249 +0000 UTC m=+2078.948513800" watchObservedRunningTime="2026-01-23 09:41:46.327125045 +0000 UTC m=+2078.950503586" Jan 23 09:42:10 crc kubenswrapper[4684]: I0123 09:42:10.043944 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-k24rv"] Jan 23 09:42:10 crc kubenswrapper[4684]: I0123 09:42:10.062920 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-k24rv"] Jan 23 09:42:11 crc kubenswrapper[4684]: I0123 09:42:11.597284 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51a6dae-114a-4a53-8e31-71f0f0124510" path="/var/lib/kubelet/pods/c51a6dae-114a-4a53-8e31-71f0f0124510/volumes" Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.081030 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zxlq8"] Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.096153 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zxlq8"] Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.592648 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708d53e6-341e-4e7b-80e8-482b0175948c" path="/var/lib/kubelet/pods/708d53e6-341e-4e7b-80e8-482b0175948c/volumes" Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.721145 4684 scope.go:117] "RemoveContainer" containerID="6f156e921e22b89ceb0741cb89089d81681d6670b52883728b4dd2e13603b52c" Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.754445 4684 scope.go:117] "RemoveContainer" containerID="3f222092aa592d93f5764d65b5d2400acc8ba125cb721c954901c7b1ff1c30ad" Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.812973 4684 scope.go:117] "RemoveContainer" containerID="bd570b482adca7f99ca2f281ec8679e767854f52d98272f50742820a27744f07" Jan 23 09:42:13 crc kubenswrapper[4684]: I0123 09:42:13.860753 4684 scope.go:117] "RemoveContainer" containerID="31063009d43a382c032f53f4355e2d098ac4e31c4b5cbaef8ff1fc7f8b44ca70" Jan 23 09:43:04 crc kubenswrapper[4684]: I0123 09:43:04.935714 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9822l"] Jan 23 09:43:04 crc kubenswrapper[4684]: I0123 09:43:04.938745 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:04 crc kubenswrapper[4684]: I0123 09:43:04.950281 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9822l"] Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.080291 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w85g\" (UniqueName: \"kubernetes.io/projected/45158d3a-544c-4350-90da-af2ebe2898b0-kube-api-access-8w85g\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.080652 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-utilities\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.080834 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-catalog-content\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.182954 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-catalog-content\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.183277 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w85g\" (UniqueName: \"kubernetes.io/projected/45158d3a-544c-4350-90da-af2ebe2898b0-kube-api-access-8w85g\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.183425 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-utilities\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.183619 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-catalog-content\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.183786 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-utilities\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.209389 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8w85g\" (UniqueName: \"kubernetes.io/projected/45158d3a-544c-4350-90da-af2ebe2898b0-kube-api-access-8w85g\") pod \"redhat-operators-9822l\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.260057 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:05 crc kubenswrapper[4684]: I0123 09:43:05.763677 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9822l"] Jan 23 09:43:06 crc kubenswrapper[4684]: I0123 09:43:06.005621 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerStarted","Data":"b22fb88b2490d3e538da0f0a01341d698f155778ac7bd3f45a4e4a204582ca35"} Jan 23 09:43:07 crc kubenswrapper[4684]: I0123 09:43:07.015246 4684 generic.go:334] "Generic (PLEG): container finished" podID="45158d3a-544c-4350-90da-af2ebe2898b0" containerID="433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1" exitCode=0 Jan 23 09:43:07 crc kubenswrapper[4684]: I0123 09:43:07.015342 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerDied","Data":"433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1"} Jan 23 09:43:07 crc kubenswrapper[4684]: I0123 09:43:07.017396 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:43:09 crc kubenswrapper[4684]: I0123 09:43:09.041408 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerStarted","Data":"a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4"} Jan 23 09:43:13 crc kubenswrapper[4684]: I0123 09:43:13.729318 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:43:13 crc kubenswrapper[4684]: I0123 09:43:13.730371 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:43:15 crc kubenswrapper[4684]: I0123 09:43:15.092288 4684 generic.go:334] "Generic (PLEG): container finished" podID="45158d3a-544c-4350-90da-af2ebe2898b0" containerID="a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4" exitCode=0 Jan 23 09:43:15 crc kubenswrapper[4684]: I0123 09:43:15.092377 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerDied","Data":"a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4"} Jan 23 09:43:16 crc kubenswrapper[4684]: I0123 09:43:16.103572 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" 
event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerStarted","Data":"8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670"} Jan 23 09:43:16 crc kubenswrapper[4684]: I0123 09:43:16.131267 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9822l" podStartSLOduration=3.530432074 podStartE2EDuration="12.131247481s" podCreationTimestamp="2026-01-23 09:43:04 +0000 UTC" firstStartedPulling="2026-01-23 09:43:07.017118539 +0000 UTC m=+2159.640497080" lastFinishedPulling="2026-01-23 09:43:15.617933946 +0000 UTC m=+2168.241312487" observedRunningTime="2026-01-23 09:43:16.131011804 +0000 UTC m=+2168.754390365" watchObservedRunningTime="2026-01-23 09:43:16.131247481 +0000 UTC m=+2168.754626022" Jan 23 09:43:17 crc kubenswrapper[4684]: I0123 09:43:17.050560 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9pq2q"] Jan 23 09:43:17 crc kubenswrapper[4684]: I0123 09:43:17.061144 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9pq2q"] Jan 23 09:43:17 crc kubenswrapper[4684]: I0123 09:43:17.622960 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffd82b5-ced8-4cca-89cb-25ad1bba207a" path="/var/lib/kubelet/pods/4ffd82b5-ced8-4cca-89cb-25ad1bba207a/volumes" Jan 23 09:43:20 crc kubenswrapper[4684]: I0123 09:43:20.028321 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gpzdh"] Jan 23 09:43:20 crc kubenswrapper[4684]: I0123 09:43:20.036175 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gpzdh"] Jan 23 09:43:21 crc kubenswrapper[4684]: I0123 09:43:21.594898 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fd9420-b726-4b9d-ad21-b05181fb6e23" path="/var/lib/kubelet/pods/82fd9420-b726-4b9d-ad21-b05181fb6e23/volumes" Jan 23 09:43:24 crc kubenswrapper[4684]: I0123 09:43:24.168422 4684 generic.go:334] "Generic (PLEG): container finished" podID="215ee287-6881-4991-ae11-e63eb7605a0a" containerID="638162a7e6f0fcefc5aba9522b91cfe96c49bd47eb9e3c02ce5d5c0e9953735d" exitCode=0 Jan 23 09:43:24 crc kubenswrapper[4684]: I0123 09:43:24.168497 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" event={"ID":"215ee287-6881-4991-ae11-e63eb7605a0a","Type":"ContainerDied","Data":"638162a7e6f0fcefc5aba9522b91cfe96c49bd47eb9e3c02ce5d5c0e9953735d"} Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.036588 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lng65"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.044553 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4098-account-create-update-mfcrh"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.053166 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4qf5d"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.068542 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4098-account-create-update-mfcrh"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.079908 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4qf5d"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.089395 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tjvmj"] Jan 23 09:43:25 crc kubenswrapper[4684]: 
I0123 09:43:25.098763 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lng65"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.110894 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tjvmj"] Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.260872 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.260955 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.593754 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea9252c-2a2c-4b59-9196-251b12919e70" path="/var/lib/kubelet/pods/3ea9252c-2a2c-4b59-9196-251b12919e70/volumes" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.594762 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bdf1ce-d5b3-4862-aa1c-4648c84f87a9" path="/var/lib/kubelet/pods/51bdf1ce-d5b3-4862-aa1c-4648c84f87a9/volumes" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.595429 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9314b229-b3d7-40b3-8c79-a327b2f0098d" path="/var/lib/kubelet/pods/9314b229-b3d7-40b3-8c79-a327b2f0098d/volumes" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.596137 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a856b676-2311-4a06-9b0c-4fd64c76e34b" path="/var/lib/kubelet/pods/a856b676-2311-4a06-9b0c-4fd64c76e34b/volumes" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.619941 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.723400 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-ssh-key-openstack-edpm-ipam\") pod \"215ee287-6881-4991-ae11-e63eb7605a0a\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.723496 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-inventory\") pod \"215ee287-6881-4991-ae11-e63eb7605a0a\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.723567 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kwvs\" (UniqueName: \"kubernetes.io/projected/215ee287-6881-4991-ae11-e63eb7605a0a-kube-api-access-4kwvs\") pod \"215ee287-6881-4991-ae11-e63eb7605a0a\" (UID: \"215ee287-6881-4991-ae11-e63eb7605a0a\") " Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.729614 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215ee287-6881-4991-ae11-e63eb7605a0a-kube-api-access-4kwvs" (OuterVolumeSpecName: "kube-api-access-4kwvs") pod "215ee287-6881-4991-ae11-e63eb7605a0a" (UID: "215ee287-6881-4991-ae11-e63eb7605a0a"). InnerVolumeSpecName "kube-api-access-4kwvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.750875 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-inventory" (OuterVolumeSpecName: "inventory") pod "215ee287-6881-4991-ae11-e63eb7605a0a" (UID: "215ee287-6881-4991-ae11-e63eb7605a0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.751131 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "215ee287-6881-4991-ae11-e63eb7605a0a" (UID: "215ee287-6881-4991-ae11-e63eb7605a0a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.826122 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kwvs\" (UniqueName: \"kubernetes.io/projected/215ee287-6881-4991-ae11-e63eb7605a0a-kube-api-access-4kwvs\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.826166 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:25 crc kubenswrapper[4684]: I0123 09:43:25.826178 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/215ee287-6881-4991-ae11-e63eb7605a0a-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.188525 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" event={"ID":"215ee287-6881-4991-ae11-e63eb7605a0a","Type":"ContainerDied","Data":"760964874c48c1fa2c99583dc07403f773239eb9ea80f6ec2499babf85793b06"} Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.188559 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.188576 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760964874c48c1fa2c99583dc07403f773239eb9ea80f6ec2499babf85793b06" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.309316 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr"] Jan 23 09:43:26 crc kubenswrapper[4684]: E0123 09:43:26.309905 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215ee287-6881-4991-ae11-e63eb7605a0a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.309927 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="215ee287-6881-4991-ae11-e63eb7605a0a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.310152 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="215ee287-6881-4991-ae11-e63eb7605a0a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.310973 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.311952 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9822l" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="registry-server" probeResult="failure" output=< Jan 23 09:43:26 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 09:43:26 crc kubenswrapper[4684]: > Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.316451 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.325152 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.326434 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr"] Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.329020 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.331720 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.435944 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.436012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.436149 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbptx\" (UniqueName: \"kubernetes.io/projected/369572f8-f12b-4c03-85d2-82ca737357ed-kube-api-access-jbptx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.538032 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.538108 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.538242 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbptx\" (UniqueName: \"kubernetes.io/projected/369572f8-f12b-4c03-85d2-82ca737357ed-kube-api-access-jbptx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.544752 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.544752 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.557962 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbptx\" (UniqueName: \"kubernetes.io/projected/369572f8-f12b-4c03-85d2-82ca737357ed-kube-api-access-jbptx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r79jr\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:26 crc kubenswrapper[4684]: I0123 09:43:26.628616 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.041349 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e4a7-account-create-update-mx2vl"] Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.061833 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a91c-account-create-update-78wch"] Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.065959 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e4a7-account-create-update-mx2vl"] Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.074486 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a91c-account-create-update-78wch"] Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.248284 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr"] Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.591478 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b" path="/var/lib/kubelet/pods/14cb5f92-83cd-4cd7-8a3c-7dcccd239f6b/volumes" Jan 23 09:43:27 crc kubenswrapper[4684]: I0123 09:43:27.592435 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e849936f-39a5-4742-b2d8-d74a04de0ad1" path="/var/lib/kubelet/pods/e849936f-39a5-4742-b2d8-d74a04de0ad1/volumes" Jan 23 09:43:28 crc kubenswrapper[4684]: I0123 09:43:28.209082 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" event={"ID":"369572f8-f12b-4c03-85d2-82ca737357ed","Type":"ContainerStarted","Data":"693d96b0c0f467e33786d6297c7182ad904533d38c295b117911e904f35c5cbd"} Jan 23 09:43:28 crc kubenswrapper[4684]: I0123 09:43:28.209723 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" event={"ID":"369572f8-f12b-4c03-85d2-82ca737357ed","Type":"ContainerStarted","Data":"968581aaa0d9338fbccc5c568c2e4e340a85beed17da1234fcfcf94f1e02353b"} Jan 23 09:43:28 crc kubenswrapper[4684]: I0123 09:43:28.227687 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" podStartSLOduration=1.7395099429999998 podStartE2EDuration="2.227667711s" podCreationTimestamp="2026-01-23 09:43:26 +0000 UTC" firstStartedPulling="2026-01-23 09:43:27.256317767 +0000 UTC m=+2179.879696308" lastFinishedPulling="2026-01-23 09:43:27.744475545 +0000 UTC m=+2180.367854076" observedRunningTime="2026-01-23 09:43:28.225090678 +0000 UTC m=+2180.848469229" watchObservedRunningTime="2026-01-23 09:43:28.227667711 +0000 UTC m=+2180.851046252" Jan 23 09:43:34 crc kubenswrapper[4684]: I0123 09:43:34.261265 4684 generic.go:334] "Generic (PLEG): container finished" podID="369572f8-f12b-4c03-85d2-82ca737357ed" containerID="693d96b0c0f467e33786d6297c7182ad904533d38c295b117911e904f35c5cbd" exitCode=0 Jan 23 09:43:34 crc kubenswrapper[4684]: I0123 09:43:34.261381 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" event={"ID":"369572f8-f12b-4c03-85d2-82ca737357ed","Type":"ContainerDied","Data":"693d96b0c0f467e33786d6297c7182ad904533d38c295b117911e904f35c5cbd"} Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.313430 4684 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.380984 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.692848 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.743846 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-inventory\") pod \"369572f8-f12b-4c03-85d2-82ca737357ed\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.744014 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-ssh-key-openstack-edpm-ipam\") pod \"369572f8-f12b-4c03-85d2-82ca737357ed\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.745078 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbptx\" (UniqueName: \"kubernetes.io/projected/369572f8-f12b-4c03-85d2-82ca737357ed-kube-api-access-jbptx\") pod \"369572f8-f12b-4c03-85d2-82ca737357ed\" (UID: \"369572f8-f12b-4c03-85d2-82ca737357ed\") " Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.751584 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369572f8-f12b-4c03-85d2-82ca737357ed-kube-api-access-jbptx" (OuterVolumeSpecName: "kube-api-access-jbptx") pod "369572f8-f12b-4c03-85d2-82ca737357ed" (UID: "369572f8-f12b-4c03-85d2-82ca737357ed"). InnerVolumeSpecName "kube-api-access-jbptx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.774310 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "369572f8-f12b-4c03-85d2-82ca737357ed" (UID: "369572f8-f12b-4c03-85d2-82ca737357ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.774347 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-inventory" (OuterVolumeSpecName: "inventory") pod "369572f8-f12b-4c03-85d2-82ca737357ed" (UID: "369572f8-f12b-4c03-85d2-82ca737357ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.847131 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.847173 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/369572f8-f12b-4c03-85d2-82ca737357ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:35 crc kubenswrapper[4684]: I0123 09:43:35.847189 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbptx\" (UniqueName: \"kubernetes.io/projected/369572f8-f12b-4c03-85d2-82ca737357ed-kube-api-access-jbptx\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.130387 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9822l"] Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.280044 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" event={"ID":"369572f8-f12b-4c03-85d2-82ca737357ed","Type":"ContainerDied","Data":"968581aaa0d9338fbccc5c568c2e4e340a85beed17da1234fcfcf94f1e02353b"} Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.280107 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968581aaa0d9338fbccc5c568c2e4e340a85beed17da1234fcfcf94f1e02353b" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.280069 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.355361 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859"] Jan 23 09:43:36 crc kubenswrapper[4684]: E0123 09:43:36.355899 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369572f8-f12b-4c03-85d2-82ca737357ed" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.355920 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="369572f8-f12b-4c03-85d2-82ca737357ed" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.356157 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="369572f8-f12b-4c03-85d2-82ca737357ed" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.356925 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.358966 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.360221 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.362959 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.376302 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.379039 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859"] Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.559055 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9kw\" (UniqueName: \"kubernetes.io/projected/37823d9e-d4f7-4efa-9edd-0dc597578f7e-kube-api-access-dr9kw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.559107 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.559136 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.662329 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9kw\" (UniqueName: \"kubernetes.io/projected/37823d9e-d4f7-4efa-9edd-0dc597578f7e-kube-api-access-dr9kw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.662419 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.662466 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.666722 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.678295 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.693120 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9kw\" (UniqueName: \"kubernetes.io/projected/37823d9e-d4f7-4efa-9edd-0dc597578f7e-kube-api-access-dr9kw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gz859\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:36 crc kubenswrapper[4684]: I0123 09:43:36.987509 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.287397 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9822l" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="registry-server" containerID="cri-o://8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670" gracePeriod=2 Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.521004 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859"] Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.661000 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.790818 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w85g\" (UniqueName: \"kubernetes.io/projected/45158d3a-544c-4350-90da-af2ebe2898b0-kube-api-access-8w85g\") pod \"45158d3a-544c-4350-90da-af2ebe2898b0\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.791034 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-catalog-content\") pod \"45158d3a-544c-4350-90da-af2ebe2898b0\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.791117 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-utilities\") pod \"45158d3a-544c-4350-90da-af2ebe2898b0\" (UID: \"45158d3a-544c-4350-90da-af2ebe2898b0\") " Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.794876 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-utilities" (OuterVolumeSpecName: "utilities") pod "45158d3a-544c-4350-90da-af2ebe2898b0" (UID: "45158d3a-544c-4350-90da-af2ebe2898b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.800864 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45158d3a-544c-4350-90da-af2ebe2898b0-kube-api-access-8w85g" (OuterVolumeSpecName: "kube-api-access-8w85g") pod "45158d3a-544c-4350-90da-af2ebe2898b0" (UID: "45158d3a-544c-4350-90da-af2ebe2898b0"). InnerVolumeSpecName "kube-api-access-8w85g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.893352 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.893648 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w85g\" (UniqueName: \"kubernetes.io/projected/45158d3a-544c-4350-90da-af2ebe2898b0-kube-api-access-8w85g\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.922265 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45158d3a-544c-4350-90da-af2ebe2898b0" (UID: "45158d3a-544c-4350-90da-af2ebe2898b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:43:37 crc kubenswrapper[4684]: I0123 09:43:37.996136 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45158d3a-544c-4350-90da-af2ebe2898b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.305376 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" event={"ID":"37823d9e-d4f7-4efa-9edd-0dc597578f7e","Type":"ContainerStarted","Data":"b24516466f72883494f36501f52d80eeb68b67af25b8945437bc4edcf4cb2f00"} Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.312626 4684 generic.go:334] "Generic (PLEG): container finished" podID="45158d3a-544c-4350-90da-af2ebe2898b0" containerID="8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670" exitCode=0 Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.312677 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerDied","Data":"8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670"} Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.312751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9822l" event={"ID":"45158d3a-544c-4350-90da-af2ebe2898b0","Type":"ContainerDied","Data":"b22fb88b2490d3e538da0f0a01341d698f155778ac7bd3f45a4e4a204582ca35"} Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.312760 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9822l" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.312823 4684 scope.go:117] "RemoveContainer" containerID="8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.338537 4684 scope.go:117] "RemoveContainer" containerID="a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.360140 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9822l"] Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.368469 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9822l"] Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.373626 4684 scope.go:117] "RemoveContainer" containerID="433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.396261 4684 scope.go:117] "RemoveContainer" containerID="8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670" Jan 23 09:43:38 crc kubenswrapper[4684]: E0123 09:43:38.396865 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670\": container with ID starting with 8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670 not found: ID does not exist" containerID="8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.396898 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670"} err="failed to get container status 
\"8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670\": rpc error: code = NotFound desc = could not find container \"8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670\": container with ID starting with 8ac6d70cd9a131b99387f18a8e54ef9ccef742a7f50ed38e3b9246971276f670 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.396920 4684 scope.go:117] "RemoveContainer" containerID="a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4" Jan 23 09:43:38 crc kubenswrapper[4684]: E0123 09:43:38.397225 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4\": container with ID starting with a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4 not found: ID does not exist" containerID="a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.397250 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4"} err="failed to get container status \"a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4\": rpc error: code = NotFound desc = could not find container \"a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4\": container with ID starting with a992e929a36ff966d6a34905555170e880b37b898c0f124365b529a8b0b13ae4 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.397264 4684 scope.go:117] "RemoveContainer" containerID="433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1" Jan 23 09:43:38 crc kubenswrapper[4684]: E0123 09:43:38.397615 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1\": container with ID starting with 433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1 not found: ID does not exist" containerID="433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1" Jan 23 09:43:38 crc kubenswrapper[4684]: I0123 09:43:38.397657 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1"} err="failed to get container status \"433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1\": rpc error: code = NotFound desc = could not find container \"433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1\": container with ID starting with 433703ee619bddd2ca579f03af909359f5c829b9d43075bd0adb792c5bdf8db1 not found: ID does not exist" Jan 23 09:43:39 crc kubenswrapper[4684]: I0123 09:43:39.322670 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" event={"ID":"37823d9e-d4f7-4efa-9edd-0dc597578f7e","Type":"ContainerStarted","Data":"23a6818c025d2db803d06b1b0ad6686cd204b5a6a302a548826f2b5219d5c75f"} Jan 23 09:43:39 crc kubenswrapper[4684]: I0123 09:43:39.347765 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" podStartSLOduration=2.815315885 podStartE2EDuration="3.347677104s" podCreationTimestamp="2026-01-23 09:43:36 +0000 UTC" firstStartedPulling="2026-01-23 09:43:37.549684901 +0000 UTC m=+2190.173063442" 
lastFinishedPulling="2026-01-23 09:43:38.08204612 +0000 UTC m=+2190.705424661" observedRunningTime="2026-01-23 09:43:39.342590159 +0000 UTC m=+2191.965968700" watchObservedRunningTime="2026-01-23 09:43:39.347677104 +0000 UTC m=+2191.971055645" Jan 23 09:43:39 crc kubenswrapper[4684]: I0123 09:43:39.593909 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" path="/var/lib/kubelet/pods/45158d3a-544c-4350-90da-af2ebe2898b0/volumes" Jan 23 09:43:43 crc kubenswrapper[4684]: I0123 09:43:43.728875 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:43:43 crc kubenswrapper[4684]: I0123 09:43:43.729487 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:43:48 crc kubenswrapper[4684]: I0123 09:43:48.073885 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mjwvr"] Jan 23 09:43:48 crc kubenswrapper[4684]: I0123 09:43:48.087633 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mjwvr"] Jan 23 09:43:49 crc kubenswrapper[4684]: I0123 09:43:49.595417 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd7bf23-46a9-4032-97f0-8d7984b734e0" path="/var/lib/kubelet/pods/5fd7bf23-46a9-4032-97f0-8d7984b734e0/volumes" Jan 23 09:44:13 crc kubenswrapper[4684]: I0123 09:44:13.728343 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:44:13 crc kubenswrapper[4684]: I0123 09:44:13.729021 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:44:13 crc kubenswrapper[4684]: I0123 09:44:13.729073 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:44:13 crc kubenswrapper[4684]: I0123 09:44:13.729891 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:44:13 crc kubenswrapper[4684]: I0123 09:44:13.729953 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" 
containerID="cri-o://4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3" gracePeriod=600 Jan 23 09:44:13 crc kubenswrapper[4684]: E0123 09:44:13.922619 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe8e0d00_860e_4d47_9f48_686555520d79.slice/crio-4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.019237 4684 scope.go:117] "RemoveContainer" containerID="6943c0e475ecbb2a15f88f73e6f1cb336079f6a44c0e618478a800b0a95d33f3" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.043059 4684 scope.go:117] "RemoveContainer" containerID="58b51666892381932027bd23a62c8b283aa5d54836e5e3d07ff3e478db0e8310" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.087367 4684 scope.go:117] "RemoveContainer" containerID="3e26eee440f0ec913ab7e4b7d3e25f44476d2262a35cfb2268773ccc12052a03" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.139561 4684 scope.go:117] "RemoveContainer" containerID="1f92611b2ba669fe16cef70364ca7ce8e9c1cbf3585f43341dfcf83194801d6f" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.250583 4684 scope.go:117] "RemoveContainer" containerID="128721acce2a0336c13eca4bea3d5af0c23bbfd5b499f7e8f079d8a553cd5bcc" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.274091 4684 scope.go:117] "RemoveContainer" containerID="f8fde4f2b4c0065028fa2e7df507122426c9c3503bc90505443a6cffe2b7b394" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.292646 4684 scope.go:117] "RemoveContainer" containerID="cde92c17ce65f379fd443261643cabe7c35c0c4865c6e4db9c2323a10729d113" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.311087 4684 scope.go:117] "RemoveContainer" containerID="d9093d7423c81301b2be5e47d0675088888410423e30f534ec336ff35fa8df5a" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.330716 4684 scope.go:117] "RemoveContainer" containerID="7bf1e0cc8b6d0352dac476223651158bee043c796ac7567db416cf94db715313" Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.614920 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3" exitCode=0 Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.614977 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3"} Jan 23 09:44:14 crc kubenswrapper[4684]: I0123 09:44:14.615027 4684 scope.go:117] "RemoveContainer" containerID="aaa3253f44fc261eba23e0bab4fba49957b928d9d9a01fb268ab6087cc818562" Jan 23 09:44:16 crc kubenswrapper[4684]: I0123 09:44:16.644622 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632"} Jan 23 09:44:23 crc kubenswrapper[4684]: I0123 09:44:23.709268 4684 generic.go:334] "Generic (PLEG): container finished" podID="37823d9e-d4f7-4efa-9edd-0dc597578f7e" containerID="23a6818c025d2db803d06b1b0ad6686cd204b5a6a302a548826f2b5219d5c75f" exitCode=0 Jan 23 09:44:23 crc kubenswrapper[4684]: I0123 09:44:23.709364 
Jan 23 09:44:23 crc kubenswrapper[4684]: I0123 09:44:23.709364 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" event={"ID":"37823d9e-d4f7-4efa-9edd-0dc597578f7e","Type":"ContainerDied","Data":"23a6818c025d2db803d06b1b0ad6686cd204b5a6a302a548826f2b5219d5c75f"}
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.120844 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.186883 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-ssh-key-openstack-edpm-ipam\") pod \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") "
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.187086 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-inventory\") pod \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") "
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.187222 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr9kw\" (UniqueName: \"kubernetes.io/projected/37823d9e-d4f7-4efa-9edd-0dc597578f7e-kube-api-access-dr9kw\") pod \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\" (UID: \"37823d9e-d4f7-4efa-9edd-0dc597578f7e\") "
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.192312 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37823d9e-d4f7-4efa-9edd-0dc597578f7e-kube-api-access-dr9kw" (OuterVolumeSpecName: "kube-api-access-dr9kw") pod "37823d9e-d4f7-4efa-9edd-0dc597578f7e" (UID: "37823d9e-d4f7-4efa-9edd-0dc597578f7e"). InnerVolumeSpecName "kube-api-access-dr9kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.211141 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-inventory" (OuterVolumeSpecName: "inventory") pod "37823d9e-d4f7-4efa-9edd-0dc597578f7e" (UID: "37823d9e-d4f7-4efa-9edd-0dc597578f7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.217970 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37823d9e-d4f7-4efa-9edd-0dc597578f7e" (UID: "37823d9e-d4f7-4efa-9edd-0dc597578f7e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.288984 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr9kw\" (UniqueName: \"kubernetes.io/projected/37823d9e-d4f7-4efa-9edd-0dc597578f7e-kube-api-access-dr9kw\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.289017 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.289028 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37823d9e-d4f7-4efa-9edd-0dc597578f7e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.726106 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" event={"ID":"37823d9e-d4f7-4efa-9edd-0dc597578f7e","Type":"ContainerDied","Data":"b24516466f72883494f36501f52d80eeb68b67af25b8945437bc4edcf4cb2f00"} Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.726553 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b24516466f72883494f36501f52d80eeb68b67af25b8945437bc4edcf4cb2f00" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.726642 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.801959 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"] Jan 23 09:44:25 crc kubenswrapper[4684]: E0123 09:44:25.802551 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="extract-utilities" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802575 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="extract-utilities" Jan 23 09:44:25 crc kubenswrapper[4684]: E0123 09:44:25.802590 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="registry-server" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802598 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="registry-server" Jan 23 09:44:25 crc kubenswrapper[4684]: E0123 09:44:25.802616 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="extract-content" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802624 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="extract-content" Jan 23 09:44:25 crc kubenswrapper[4684]: E0123 09:44:25.802650 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37823d9e-d4f7-4efa-9edd-0dc597578f7e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802659 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="37823d9e-d4f7-4efa-9edd-0dc597578f7e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802877 
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802877 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="37823d9e-d4f7-4efa-9edd-0dc597578f7e" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.802904 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="45158d3a-544c-4350-90da-af2ebe2898b0" containerName="registry-server"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.803645 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.807038 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.807724 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.807863 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.807872 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.812171 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"]
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.901007 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4h8\" (UniqueName: \"kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.901095 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"
Jan 23 09:44:25 crc kubenswrapper[4684]: I0123 09:44:25.901183 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"
Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.002952 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"
Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.003083 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4h8\" (UniqueName: \"kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"
\"kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.003144 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.008057 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.008384 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.024023 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4h8\" (UniqueName: \"kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.118815 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.689724 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"] Jan 23 09:44:26 crc kubenswrapper[4684]: I0123 09:44:26.736013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" event={"ID":"1a0208ad-b4f8-4798-b935-e541e61a3918","Type":"ContainerStarted","Data":"2f77a37f31c0ed1d7710d97a05350b3f61aa12172a5cfe1a39a17026afac5c03"} Jan 23 09:44:28 crc kubenswrapper[4684]: I0123 09:44:28.752233 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" event={"ID":"1a0208ad-b4f8-4798-b935-e541e61a3918","Type":"ContainerStarted","Data":"09ff9cb666acdc428ea3d10a5b7ab372faef246c2af76caca4d11154bcec0425"} Jan 23 09:44:28 crc kubenswrapper[4684]: I0123 09:44:28.774836 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" podStartSLOduration=3.133988425 podStartE2EDuration="3.774818394s" podCreationTimestamp="2026-01-23 09:44:25 +0000 UTC" firstStartedPulling="2026-01-23 09:44:26.69093575 +0000 UTC m=+2239.314314281" lastFinishedPulling="2026-01-23 09:44:27.331765709 +0000 UTC m=+2239.955144250" observedRunningTime="2026-01-23 09:44:28.76766904 +0000 UTC m=+2241.391047581" watchObservedRunningTime="2026-01-23 09:44:28.774818394 +0000 UTC m=+2241.398196935" Jan 23 09:44:32 crc kubenswrapper[4684]: I0123 09:44:32.041126 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7pzwl"] Jan 23 09:44:32 crc kubenswrapper[4684]: I0123 09:44:32.050221 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7pzwl"] Jan 23 09:44:32 crc kubenswrapper[4684]: I0123 09:44:32.783765 4684 generic.go:334] "Generic (PLEG): container finished" podID="1a0208ad-b4f8-4798-b935-e541e61a3918" containerID="09ff9cb666acdc428ea3d10a5b7ab372faef246c2af76caca4d11154bcec0425" exitCode=0 Jan 23 09:44:32 crc kubenswrapper[4684]: I0123 09:44:32.784084 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" event={"ID":"1a0208ad-b4f8-4798-b935-e541e61a3918","Type":"ContainerDied","Data":"09ff9cb666acdc428ea3d10a5b7ab372faef246c2af76caca4d11154bcec0425"} Jan 23 09:44:33 crc kubenswrapper[4684]: I0123 09:44:33.595542 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a684b6-60c9-4017-91d1-7a8e340d8482" path="/var/lib/kubelet/pods/71a684b6-60c9-4017-91d1-7a8e340d8482/volumes" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.274252 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.387570 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-ssh-key-openstack-edpm-ipam\") pod \"1a0208ad-b4f8-4798-b935-e541e61a3918\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.387782 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl4h8\" (UniqueName: \"kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8\") pod \"1a0208ad-b4f8-4798-b935-e541e61a3918\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.388018 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-inventory\") pod \"1a0208ad-b4f8-4798-b935-e541e61a3918\" (UID: \"1a0208ad-b4f8-4798-b935-e541e61a3918\") " Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.406528 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8" (OuterVolumeSpecName: "kube-api-access-nl4h8") pod "1a0208ad-b4f8-4798-b935-e541e61a3918" (UID: "1a0208ad-b4f8-4798-b935-e541e61a3918"). InnerVolumeSpecName "kube-api-access-nl4h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.416641 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a0208ad-b4f8-4798-b935-e541e61a3918" (UID: "1a0208ad-b4f8-4798-b935-e541e61a3918"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.445019 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-inventory" (OuterVolumeSpecName: "inventory") pod "1a0208ad-b4f8-4798-b935-e541e61a3918" (UID: "1a0208ad-b4f8-4798-b935-e541e61a3918"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.490106 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.490152 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a0208ad-b4f8-4798-b935-e541e61a3918-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.490171 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl4h8\" (UniqueName: \"kubernetes.io/projected/1a0208ad-b4f8-4798-b935-e541e61a3918-kube-api-access-nl4h8\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.800675 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" event={"ID":"1a0208ad-b4f8-4798-b935-e541e61a3918","Type":"ContainerDied","Data":"2f77a37f31c0ed1d7710d97a05350b3f61aa12172a5cfe1a39a17026afac5c03"} Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.800979 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f77a37f31c0ed1d7710d97a05350b3f61aa12172a5cfe1a39a17026afac5c03" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.800743 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.875085 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m"] Jan 23 09:44:34 crc kubenswrapper[4684]: E0123 09:44:34.875495 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0208ad-b4f8-4798-b935-e541e61a3918" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.875525 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0208ad-b4f8-4798-b935-e541e61a3918" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.875787 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0208ad-b4f8-4798-b935-e541e61a3918" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.876401 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.880213 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.883126 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.883939 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.883951 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.889261 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m"] Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.896659 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8zh\" (UniqueName: \"kubernetes.io/projected/5c7fb6ce-b97d-4827-b8c0-254582176d6d-kube-api-access-sd8zh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.896817 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.896894 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.997995 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.998123 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:34 crc kubenswrapper[4684]: I0123 09:44:34.998157 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8zh\" (UniqueName: 
\"kubernetes.io/projected/5c7fb6ce-b97d-4827-b8c0-254582176d6d-kube-api-access-sd8zh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:35 crc kubenswrapper[4684]: I0123 09:44:35.016423 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:35 crc kubenswrapper[4684]: I0123 09:44:35.016908 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:35 crc kubenswrapper[4684]: I0123 09:44:35.019653 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8zh\" (UniqueName: \"kubernetes.io/projected/5c7fb6ce-b97d-4827-b8c0-254582176d6d-kube-api-access-sd8zh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k965m\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:35 crc kubenswrapper[4684]: I0123 09:44:35.197697 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:44:35 crc kubenswrapper[4684]: W0123 09:44:35.708952 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c7fb6ce_b97d_4827_b8c0_254582176d6d.slice/crio-304cf6b67dff6e85ce6966efdb5e32ae1f39c002157e160f64a17a215012166d WatchSource:0}: Error finding container 304cf6b67dff6e85ce6966efdb5e32ae1f39c002157e160f64a17a215012166d: Status 404 returned error can't find the container with id 304cf6b67dff6e85ce6966efdb5e32ae1f39c002157e160f64a17a215012166d Jan 23 09:44:35 crc kubenswrapper[4684]: I0123 09:44:35.716466 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m"] Jan 23 09:44:35 crc kubenswrapper[4684]: I0123 09:44:35.810866 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" event={"ID":"5c7fb6ce-b97d-4827-b8c0-254582176d6d","Type":"ContainerStarted","Data":"304cf6b67dff6e85ce6966efdb5e32ae1f39c002157e160f64a17a215012166d"} Jan 23 09:44:36 crc kubenswrapper[4684]: I0123 09:44:36.821453 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" event={"ID":"5c7fb6ce-b97d-4827-b8c0-254582176d6d","Type":"ContainerStarted","Data":"5de067ac77489ac1a36897dacb3c13d9e19e9e2f95e74ff95953abe58b424a6e"} Jan 23 09:44:36 crc kubenswrapper[4684]: I0123 09:44:36.852406 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" podStartSLOduration=2.261122585 podStartE2EDuration="2.852379808s" podCreationTimestamp="2026-01-23 09:44:34 +0000 UTC" 
firstStartedPulling="2026-01-23 09:44:35.711826801 +0000 UTC m=+2248.335205342" lastFinishedPulling="2026-01-23 09:44:36.303084024 +0000 UTC m=+2248.926462565" observedRunningTime="2026-01-23 09:44:36.844894344 +0000 UTC m=+2249.468272895" watchObservedRunningTime="2026-01-23 09:44:36.852379808 +0000 UTC m=+2249.475758349" Jan 23 09:44:59 crc kubenswrapper[4684]: I0123 09:44:59.038985 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-l7dxb"] Jan 23 09:44:59 crc kubenswrapper[4684]: I0123 09:44:59.066300 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-l7dxb"] Jan 23 09:44:59 crc kubenswrapper[4684]: I0123 09:44:59.595255 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ca078c-d881-4e98-95bf-7b7486f871d6" path="/var/lib/kubelet/pods/f3ca078c-d881-4e98-95bf-7b7486f871d6/volumes" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.143192 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l"] Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.144607 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.148246 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.152808 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.158588 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l"] Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.201578 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmmt\" (UniqueName: \"kubernetes.io/projected/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-kube-api-access-pdmmt\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.203400 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-config-volume\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.204018 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-secret-volume\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.305755 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-secret-volume\") pod \"collect-profiles-29486025-grn9l\" (UID: 
\"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.305826 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmmt\" (UniqueName: \"kubernetes.io/projected/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-kube-api-access-pdmmt\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.305935 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-config-volume\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.306958 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-config-volume\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.323352 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-secret-volume\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.328311 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmmt\" (UniqueName: \"kubernetes.io/projected/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-kube-api-access-pdmmt\") pod \"collect-profiles-29486025-grn9l\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.472411 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:00 crc kubenswrapper[4684]: I0123 09:45:00.938230 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l"] Jan 23 09:45:01 crc kubenswrapper[4684]: I0123 09:45:01.002023 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" event={"ID":"d2a849a5-07be-46fe-bbcd-d6c77b0b740a","Type":"ContainerStarted","Data":"4db404d12ace3bc903be8d583a7fa576fd413af27362f8931f8b3a1fff1e4007"} Jan 23 09:45:01 crc kubenswrapper[4684]: I0123 09:45:01.036950 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bcpvp"] Jan 23 09:45:01 crc kubenswrapper[4684]: I0123 09:45:01.048993 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bcpvp"] Jan 23 09:45:01 crc kubenswrapper[4684]: I0123 09:45:01.593489 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57bca338-31bf-4447-b296-864d1dea776e" path="/var/lib/kubelet/pods/57bca338-31bf-4447-b296-864d1dea776e/volumes" Jan 23 09:45:02 crc kubenswrapper[4684]: I0123 09:45:02.011128 4684 generic.go:334] "Generic (PLEG): container finished" podID="d2a849a5-07be-46fe-bbcd-d6c77b0b740a" containerID="e149ef284d142fb4d666f48e7d21111f7b0a1565ed435f6d4ca08cc41235437c" exitCode=0 Jan 23 09:45:02 crc kubenswrapper[4684]: I0123 09:45:02.011174 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" event={"ID":"d2a849a5-07be-46fe-bbcd-d6c77b0b740a","Type":"ContainerDied","Data":"e149ef284d142fb4d666f48e7d21111f7b0a1565ed435f6d4ca08cc41235437c"} Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.398878 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.475100 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmmt\" (UniqueName: \"kubernetes.io/projected/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-kube-api-access-pdmmt\") pod \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.475220 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-config-volume\") pod \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.475262 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-secret-volume\") pod \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\" (UID: \"d2a849a5-07be-46fe-bbcd-d6c77b0b740a\") " Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.476343 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-config-volume" (OuterVolumeSpecName: "config-volume") pod "d2a849a5-07be-46fe-bbcd-d6c77b0b740a" (UID: "d2a849a5-07be-46fe-bbcd-d6c77b0b740a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.488064 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-kube-api-access-pdmmt" (OuterVolumeSpecName: "kube-api-access-pdmmt") pod "d2a849a5-07be-46fe-bbcd-d6c77b0b740a" (UID: "d2a849a5-07be-46fe-bbcd-d6c77b0b740a"). InnerVolumeSpecName "kube-api-access-pdmmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.494905 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d2a849a5-07be-46fe-bbcd-d6c77b0b740a" (UID: "d2a849a5-07be-46fe-bbcd-d6c77b0b740a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.578036 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.578072 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:03 crc kubenswrapper[4684]: I0123 09:45:03.578084 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmmt\" (UniqueName: \"kubernetes.io/projected/d2a849a5-07be-46fe-bbcd-d6c77b0b740a-kube-api-access-pdmmt\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:04 crc kubenswrapper[4684]: I0123 09:45:04.036304 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" event={"ID":"d2a849a5-07be-46fe-bbcd-d6c77b0b740a","Type":"ContainerDied","Data":"4db404d12ace3bc903be8d583a7fa576fd413af27362f8931f8b3a1fff1e4007"} Jan 23 09:45:04 crc kubenswrapper[4684]: I0123 09:45:04.036342 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db404d12ace3bc903be8d583a7fa576fd413af27362f8931f8b3a1fff1e4007" Jan 23 09:45:04 crc kubenswrapper[4684]: I0123 09:45:04.036359 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l" Jan 23 09:45:04 crc kubenswrapper[4684]: I0123 09:45:04.477561 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"] Jan 23 09:45:04 crc kubenswrapper[4684]: I0123 09:45:04.486638 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-dfbbw"] Jan 23 09:45:05 crc kubenswrapper[4684]: I0123 09:45:05.604224 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3e8240-e3e7-42d7-a0fa-6379a76c546e" path="/var/lib/kubelet/pods/7d3e8240-e3e7-42d7-a0fa-6379a76c546e/volumes" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.546597 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6tw"] Jan 23 09:45:12 crc kubenswrapper[4684]: E0123 09:45:12.550293 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a849a5-07be-46fe-bbcd-d6c77b0b740a" containerName="collect-profiles" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.550334 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a849a5-07be-46fe-bbcd-d6c77b0b740a" containerName="collect-profiles" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.550605 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a849a5-07be-46fe-bbcd-d6c77b0b740a" containerName="collect-profiles" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.552266 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.556419 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6tw"] Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.632501 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-catalog-content\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.633547 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9tw\" (UniqueName: \"kubernetes.io/projected/4d72ec36-ec8d-4ea9-b387-279a35ad882d-kube-api-access-9w9tw\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.633583 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-utilities\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.735689 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w9tw\" (UniqueName: \"kubernetes.io/projected/4d72ec36-ec8d-4ea9-b387-279a35ad882d-kube-api-access-9w9tw\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc 
kubenswrapper[4684]: I0123 09:45:12.735776 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-utilities\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.735879 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-catalog-content\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.736420 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-catalog-content\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.737126 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-utilities\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.760118 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w9tw\" (UniqueName: \"kubernetes.io/projected/4d72ec36-ec8d-4ea9-b387-279a35ad882d-kube-api-access-9w9tw\") pod \"redhat-marketplace-nz6tw\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:12 crc kubenswrapper[4684]: I0123 09:45:12.876087 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:13 crc kubenswrapper[4684]: I0123 09:45:13.443857 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6tw"] Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.130240 4684 generic.go:334] "Generic (PLEG): container finished" podID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerID="703efb9192b4160589b4bd8b88dafb4520ec5b0d580d23aad14f0f4d23d7447b" exitCode=0 Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.130351 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerDied","Data":"703efb9192b4160589b4bd8b88dafb4520ec5b0d580d23aad14f0f4d23d7447b"} Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.130815 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerStarted","Data":"4c56c06bc8d2b5b5829231dc6207e9e711431796dd6e86f65d79033a72c33850"} Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.519373 4684 scope.go:117] "RemoveContainer" containerID="f0a50d692a88c5ab02e4415ab085cf83d51031f7e4a2189f9016c3a8a4778762" Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.568685 4684 scope.go:117] "RemoveContainer" containerID="2892349cfbda780621ff677d6c6b8e64018aa431d2495b06c636d820584190b5" Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.593775 4684 scope.go:117] "RemoveContainer" containerID="24c551e4a261aabe66b4fb2f4e85fa350c54b90b1867df15c3a26439f7433cc5" Jan 23 09:45:14 crc kubenswrapper[4684]: I0123 09:45:14.658250 4684 scope.go:117] "RemoveContainer" containerID="ef6fe5cc42a3c15cdc86ba1f8947b8dab11d1cb218beb770a9be7ef5069bcf13" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.141117 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerStarted","Data":"981813583e5802f5d651cd0a4e0e82c442bb6096ce1e27722ccc1a888870a668"} Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.682099 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ktrtq"] Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.684116 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.705944 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktrtq"] Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.801938 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-utilities\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.801989 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-catalog-content\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.802221 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fl7\" (UniqueName: \"kubernetes.io/projected/c1c0b235-c228-481a-9916-962ae0ddb620-kube-api-access-p6fl7\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.905608 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-catalog-content\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.905668 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-utilities\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.905881 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fl7\" (UniqueName: \"kubernetes.io/projected/c1c0b235-c228-481a-9916-962ae0ddb620-kube-api-access-p6fl7\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.906235 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-utilities\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.906262 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-catalog-content\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:15 crc kubenswrapper[4684]: I0123 09:45:15.946220 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p6fl7\" (UniqueName: \"kubernetes.io/projected/c1c0b235-c228-481a-9916-962ae0ddb620-kube-api-access-p6fl7\") pod \"community-operators-ktrtq\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:16 crc kubenswrapper[4684]: I0123 09:45:16.002949 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:16 crc kubenswrapper[4684]: I0123 09:45:16.151289 4684 generic.go:334] "Generic (PLEG): container finished" podID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerID="981813583e5802f5d651cd0a4e0e82c442bb6096ce1e27722ccc1a888870a668" exitCode=0 Jan 23 09:45:16 crc kubenswrapper[4684]: I0123 09:45:16.151331 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerDied","Data":"981813583e5802f5d651cd0a4e0e82c442bb6096ce1e27722ccc1a888870a668"} Jan 23 09:45:16 crc kubenswrapper[4684]: I0123 09:45:16.632237 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktrtq"] Jan 23 09:45:16 crc kubenswrapper[4684]: W0123 09:45:16.639931 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1c0b235_c228_481a_9916_962ae0ddb620.slice/crio-e4378c46c47f72e9f27f22cdbb89c9b5c4a2e68dae1e6ad526c756433a2f2c7c WatchSource:0}: Error finding container e4378c46c47f72e9f27f22cdbb89c9b5c4a2e68dae1e6ad526c756433a2f2c7c: Status 404 returned error can't find the container with id e4378c46c47f72e9f27f22cdbb89c9b5c4a2e68dae1e6ad526c756433a2f2c7c Jan 23 09:45:17 crc kubenswrapper[4684]: I0123 09:45:17.161386 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerDied","Data":"9c6117fec8c8411bae132309b3905906ac7466e384b99db0d45ec44cfa51df9a"} Jan 23 09:45:17 crc kubenswrapper[4684]: I0123 09:45:17.161249 4684 generic.go:334] "Generic (PLEG): container finished" podID="c1c0b235-c228-481a-9916-962ae0ddb620" containerID="9c6117fec8c8411bae132309b3905906ac7466e384b99db0d45ec44cfa51df9a" exitCode=0 Jan 23 09:45:17 crc kubenswrapper[4684]: I0123 09:45:17.162517 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerStarted","Data":"e4378c46c47f72e9f27f22cdbb89c9b5c4a2e68dae1e6ad526c756433a2f2c7c"} Jan 23 09:45:17 crc kubenswrapper[4684]: I0123 09:45:17.170801 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerStarted","Data":"d1e9132dbbc4b561ba01297ea66e136fee494a9ec48bfa3422cf31087139118d"} Jan 23 09:45:18 crc kubenswrapper[4684]: I0123 09:45:18.183872 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerStarted","Data":"896493f9392da1bd6d1d9bb1cd68d8daf931d786d2a86426ee88311b45637b83"} Jan 23 09:45:18 crc kubenswrapper[4684]: I0123 09:45:18.212537 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nz6tw" podStartSLOduration=3.756304123 
podStartE2EDuration="6.212511982s" podCreationTimestamp="2026-01-23 09:45:12 +0000 UTC" firstStartedPulling="2026-01-23 09:45:14.131889734 +0000 UTC m=+2286.755268275" lastFinishedPulling="2026-01-23 09:45:16.588097593 +0000 UTC m=+2289.211476134" observedRunningTime="2026-01-23 09:45:17.237866008 +0000 UTC m=+2289.861244549" watchObservedRunningTime="2026-01-23 09:45:18.212511982 +0000 UTC m=+2290.835890543" Jan 23 09:45:20 crc kubenswrapper[4684]: I0123 09:45:20.202491 4684 generic.go:334] "Generic (PLEG): container finished" podID="c1c0b235-c228-481a-9916-962ae0ddb620" containerID="896493f9392da1bd6d1d9bb1cd68d8daf931d786d2a86426ee88311b45637b83" exitCode=0 Jan 23 09:45:20 crc kubenswrapper[4684]: I0123 09:45:20.202543 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerDied","Data":"896493f9392da1bd6d1d9bb1cd68d8daf931d786d2a86426ee88311b45637b83"} Jan 23 09:45:21 crc kubenswrapper[4684]: I0123 09:45:21.213458 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerStarted","Data":"14875fd0ccfc761a372efe44915dd443fdb9fa363fc8d3126390c6844695949e"} Jan 23 09:45:21 crc kubenswrapper[4684]: I0123 09:45:21.232556 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ktrtq" podStartSLOduration=2.523783942 podStartE2EDuration="6.232536729s" podCreationTimestamp="2026-01-23 09:45:15 +0000 UTC" firstStartedPulling="2026-01-23 09:45:17.163549293 +0000 UTC m=+2289.786927834" lastFinishedPulling="2026-01-23 09:45:20.87230208 +0000 UTC m=+2293.495680621" observedRunningTime="2026-01-23 09:45:21.230612064 +0000 UTC m=+2293.853990625" watchObservedRunningTime="2026-01-23 09:45:21.232536729 +0000 UTC m=+2293.855915270" Jan 23 09:45:22 crc kubenswrapper[4684]: I0123 09:45:22.877069 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:22 crc kubenswrapper[4684]: I0123 09:45:22.877396 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:22 crc kubenswrapper[4684]: I0123 09:45:22.923918 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:23 crc kubenswrapper[4684]: I0123 09:45:23.281751 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:24 crc kubenswrapper[4684]: I0123 09:45:24.075131 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6tw"] Jan 23 09:45:25 crc kubenswrapper[4684]: I0123 09:45:25.251450 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nz6tw" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="registry-server" containerID="cri-o://d1e9132dbbc4b561ba01297ea66e136fee494a9ec48bfa3422cf31087139118d" gracePeriod=2 Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.006577 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.006634 4684 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.070583 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.262322 4684 generic.go:334] "Generic (PLEG): container finished" podID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerID="d1e9132dbbc4b561ba01297ea66e136fee494a9ec48bfa3422cf31087139118d" exitCode=0 Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.262373 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerDied","Data":"d1e9132dbbc4b561ba01297ea66e136fee494a9ec48bfa3422cf31087139118d"} Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.304274 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.823728 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.961720 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-catalog-content\") pod \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.962188 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w9tw\" (UniqueName: \"kubernetes.io/projected/4d72ec36-ec8d-4ea9-b387-279a35ad882d-kube-api-access-9w9tw\") pod \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.962279 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-utilities\") pod \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\" (UID: \"4d72ec36-ec8d-4ea9-b387-279a35ad882d\") " Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.963574 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-utilities" (OuterVolumeSpecName: "utilities") pod "4d72ec36-ec8d-4ea9-b387-279a35ad882d" (UID: "4d72ec36-ec8d-4ea9-b387-279a35ad882d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.968816 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d72ec36-ec8d-4ea9-b387-279a35ad882d-kube-api-access-9w9tw" (OuterVolumeSpecName: "kube-api-access-9w9tw") pod "4d72ec36-ec8d-4ea9-b387-279a35ad882d" (UID: "4d72ec36-ec8d-4ea9-b387-279a35ad882d"). InnerVolumeSpecName "kube-api-access-9w9tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:26 crc kubenswrapper[4684]: I0123 09:45:26.980888 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d72ec36-ec8d-4ea9-b387-279a35ad882d" (UID: "4d72ec36-ec8d-4ea9-b387-279a35ad882d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.064928 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w9tw\" (UniqueName: \"kubernetes.io/projected/4d72ec36-ec8d-4ea9-b387-279a35ad882d-kube-api-access-9w9tw\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.064988 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.065003 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d72ec36-ec8d-4ea9-b387-279a35ad882d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.273464 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6tw" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.275207 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6tw" event={"ID":"4d72ec36-ec8d-4ea9-b387-279a35ad882d","Type":"ContainerDied","Data":"4c56c06bc8d2b5b5829231dc6207e9e711431796dd6e86f65d79033a72c33850"} Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.275277 4684 scope.go:117] "RemoveContainer" containerID="d1e9132dbbc4b561ba01297ea66e136fee494a9ec48bfa3422cf31087139118d" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.306925 4684 scope.go:117] "RemoveContainer" containerID="981813583e5802f5d651cd0a4e0e82c442bb6096ce1e27722ccc1a888870a668" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.323774 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6tw"] Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.334130 4684 scope.go:117] "RemoveContainer" containerID="703efb9192b4160589b4bd8b88dafb4520ec5b0d580d23aad14f0f4d23d7447b" Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.345940 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6tw"] Jan 23 09:45:27 crc kubenswrapper[4684]: I0123 09:45:27.599718 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" path="/var/lib/kubelet/pods/4d72ec36-ec8d-4ea9-b387-279a35ad882d/volumes" Jan 23 09:45:28 crc kubenswrapper[4684]: I0123 09:45:28.471754 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktrtq"] Jan 23 09:45:28 crc kubenswrapper[4684]: I0123 09:45:28.472258 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ktrtq" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="registry-server" containerID="cri-o://14875fd0ccfc761a372efe44915dd443fdb9fa363fc8d3126390c6844695949e" gracePeriod=2 Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.296086 4684 generic.go:334] "Generic (PLEG): container finished" podID="c1c0b235-c228-481a-9916-962ae0ddb620" containerID="14875fd0ccfc761a372efe44915dd443fdb9fa363fc8d3126390c6844695949e" exitCode=0 Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.296175 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" 
event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerDied","Data":"14875fd0ccfc761a372efe44915dd443fdb9fa363fc8d3126390c6844695949e"} Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.439827 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.611195 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6fl7\" (UniqueName: \"kubernetes.io/projected/c1c0b235-c228-481a-9916-962ae0ddb620-kube-api-access-p6fl7\") pod \"c1c0b235-c228-481a-9916-962ae0ddb620\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.611306 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-utilities\") pod \"c1c0b235-c228-481a-9916-962ae0ddb620\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.611371 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-catalog-content\") pod \"c1c0b235-c228-481a-9916-962ae0ddb620\" (UID: \"c1c0b235-c228-481a-9916-962ae0ddb620\") " Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.613084 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-utilities" (OuterVolumeSpecName: "utilities") pod "c1c0b235-c228-481a-9916-962ae0ddb620" (UID: "c1c0b235-c228-481a-9916-962ae0ddb620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.618140 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c0b235-c228-481a-9916-962ae0ddb620-kube-api-access-p6fl7" (OuterVolumeSpecName: "kube-api-access-p6fl7") pod "c1c0b235-c228-481a-9916-962ae0ddb620" (UID: "c1c0b235-c228-481a-9916-962ae0ddb620"). InnerVolumeSpecName "kube-api-access-p6fl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.670610 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1c0b235-c228-481a-9916-962ae0ddb620" (UID: "c1c0b235-c228-481a-9916-962ae0ddb620"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.715242 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.715278 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6fl7\" (UniqueName: \"kubernetes.io/projected/c1c0b235-c228-481a-9916-962ae0ddb620-kube-api-access-p6fl7\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:29 crc kubenswrapper[4684]: I0123 09:45:29.715291 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c0b235-c228-481a-9916-962ae0ddb620-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.306794 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktrtq" event={"ID":"c1c0b235-c228-481a-9916-962ae0ddb620","Type":"ContainerDied","Data":"e4378c46c47f72e9f27f22cdbb89c9b5c4a2e68dae1e6ad526c756433a2f2c7c"} Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.307492 4684 scope.go:117] "RemoveContainer" containerID="14875fd0ccfc761a372efe44915dd443fdb9fa363fc8d3126390c6844695949e" Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.307096 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktrtq" Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.356078 4684 scope.go:117] "RemoveContainer" containerID="896493f9392da1bd6d1d9bb1cd68d8daf931d786d2a86426ee88311b45637b83" Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.363645 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktrtq"] Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.382655 4684 scope.go:117] "RemoveContainer" containerID="9c6117fec8c8411bae132309b3905906ac7466e384b99db0d45ec44cfa51df9a" Jan 23 09:45:30 crc kubenswrapper[4684]: I0123 09:45:30.409905 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ktrtq"] Jan 23 09:45:31 crc kubenswrapper[4684]: I0123 09:45:31.605248 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" path="/var/lib/kubelet/pods/c1c0b235-c228-481a-9916-962ae0ddb620/volumes" Jan 23 09:45:32 crc kubenswrapper[4684]: I0123 09:45:32.325956 4684 generic.go:334] "Generic (PLEG): container finished" podID="5c7fb6ce-b97d-4827-b8c0-254582176d6d" containerID="5de067ac77489ac1a36897dacb3c13d9e19e9e2f95e74ff95953abe58b424a6e" exitCode=0 Jan 23 09:45:32 crc kubenswrapper[4684]: I0123 09:45:32.326002 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" event={"ID":"5c7fb6ce-b97d-4827-b8c0-254582176d6d","Type":"ContainerDied","Data":"5de067ac77489ac1a36897dacb3c13d9e19e9e2f95e74ff95953abe58b424a6e"} Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.787381 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.849658 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-inventory\") pod \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.850049 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd8zh\" (UniqueName: \"kubernetes.io/projected/5c7fb6ce-b97d-4827-b8c0-254582176d6d-kube-api-access-sd8zh\") pod \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.850160 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-ssh-key-openstack-edpm-ipam\") pod \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\" (UID: \"5c7fb6ce-b97d-4827-b8c0-254582176d6d\") " Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.859025 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7fb6ce-b97d-4827-b8c0-254582176d6d-kube-api-access-sd8zh" (OuterVolumeSpecName: "kube-api-access-sd8zh") pod "5c7fb6ce-b97d-4827-b8c0-254582176d6d" (UID: "5c7fb6ce-b97d-4827-b8c0-254582176d6d"). InnerVolumeSpecName "kube-api-access-sd8zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.875047 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c7fb6ce-b97d-4827-b8c0-254582176d6d" (UID: "5c7fb6ce-b97d-4827-b8c0-254582176d6d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.877395 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-inventory" (OuterVolumeSpecName: "inventory") pod "5c7fb6ce-b97d-4827-b8c0-254582176d6d" (UID: "5c7fb6ce-b97d-4827-b8c0-254582176d6d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.952761 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd8zh\" (UniqueName: \"kubernetes.io/projected/5c7fb6ce-b97d-4827-b8c0-254582176d6d-kube-api-access-sd8zh\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.952809 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:33 crc kubenswrapper[4684]: I0123 09:45:33.952824 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c7fb6ce-b97d-4827-b8c0-254582176d6d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.343379 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" event={"ID":"5c7fb6ce-b97d-4827-b8c0-254582176d6d","Type":"ContainerDied","Data":"304cf6b67dff6e85ce6966efdb5e32ae1f39c002157e160f64a17a215012166d"} Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.343424 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304cf6b67dff6e85ce6966efdb5e32ae1f39c002157e160f64a17a215012166d" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.343427 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.467916 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-tqfjf"] Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468564 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="extract-utilities" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468583 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="extract-utilities" Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468592 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="registry-server" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468598 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="registry-server" Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468609 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="registry-server" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468615 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="registry-server" Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468632 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="extract-content" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468637 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="extract-content" Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468649 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="extract-content" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468655 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="extract-content" Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468665 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="extract-utilities" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468671 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="extract-utilities" Jan 23 09:45:34 crc kubenswrapper[4684]: E0123 09:45:34.468689 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7fb6ce-b97d-4827-b8c0-254582176d6d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468700 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7fb6ce-b97d-4827-b8c0-254582176d6d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468874 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d72ec36-ec8d-4ea9-b387-279a35ad882d" containerName="registry-server" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468901 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7fb6ce-b97d-4827-b8c0-254582176d6d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.468913 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c0b235-c228-481a-9916-962ae0ddb620" containerName="registry-server" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.469485 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.471748 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.472022 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.472589 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.477592 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.494318 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-tqfjf"] Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.565027 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pgn\" (UniqueName: \"kubernetes.io/projected/2b58eeeb-a5c7-4034-9722-9118d571ca6e-kube-api-access-x2pgn\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.565242 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.565363 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.667141 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2pgn\" (UniqueName: \"kubernetes.io/projected/2b58eeeb-a5c7-4034-9722-9118d571ca6e-kube-api-access-x2pgn\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.667217 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.667268 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc 
kubenswrapper[4684]: I0123 09:45:34.682271 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.682447 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.688218 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2pgn\" (UniqueName: \"kubernetes.io/projected/2b58eeeb-a5c7-4034-9722-9118d571ca6e-kube-api-access-x2pgn\") pod \"ssh-known-hosts-edpm-deployment-tqfjf\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:34 crc kubenswrapper[4684]: I0123 09:45:34.786453 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:35 crc kubenswrapper[4684]: I0123 09:45:35.381511 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-tqfjf"] Jan 23 09:45:35 crc kubenswrapper[4684]: W0123 09:45:35.426483 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b58eeeb_a5c7_4034_9722_9118d571ca6e.slice/crio-7c95131d1a588c30c92a052381f639963a3a1b53ca89d71770660c46637eca58 WatchSource:0}: Error finding container 7c95131d1a588c30c92a052381f639963a3a1b53ca89d71770660c46637eca58: Status 404 returned error can't find the container with id 7c95131d1a588c30c92a052381f639963a3a1b53ca89d71770660c46637eca58 Jan 23 09:45:36 crc kubenswrapper[4684]: I0123 09:45:36.360383 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" event={"ID":"2b58eeeb-a5c7-4034-9722-9118d571ca6e","Type":"ContainerStarted","Data":"eeedf85593466cb28ef5a7381de33ffd980211147cf8537b3b4654801d041eb9"} Jan 23 09:45:36 crc kubenswrapper[4684]: I0123 09:45:36.360796 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" event={"ID":"2b58eeeb-a5c7-4034-9722-9118d571ca6e","Type":"ContainerStarted","Data":"7c95131d1a588c30c92a052381f639963a3a1b53ca89d71770660c46637eca58"} Jan 23 09:45:36 crc kubenswrapper[4684]: I0123 09:45:36.386551 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" podStartSLOduration=1.778263195 podStartE2EDuration="2.386524325s" podCreationTimestamp="2026-01-23 09:45:34 +0000 UTC" firstStartedPulling="2026-01-23 09:45:35.434410435 +0000 UTC m=+2308.057788966" lastFinishedPulling="2026-01-23 09:45:36.042671555 +0000 UTC m=+2308.666050096" observedRunningTime="2026-01-23 09:45:36.380076421 +0000 UTC m=+2309.003454962" watchObservedRunningTime="2026-01-23 09:45:36.386524325 +0000 UTC m=+2309.009902866" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.063607 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-6t6d7"] Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.075853 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6d7"] Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.427797 4684 generic.go:334] "Generic (PLEG): container finished" podID="2b58eeeb-a5c7-4034-9722-9118d571ca6e" containerID="eeedf85593466cb28ef5a7381de33ffd980211147cf8537b3b4654801d041eb9" exitCode=0 Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.427986 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" event={"ID":"2b58eeeb-a5c7-4034-9722-9118d571ca6e","Type":"ContainerDied","Data":"eeedf85593466cb28ef5a7381de33ffd980211147cf8537b3b4654801d041eb9"} Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.607823 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tfk4n"] Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.614880 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.665913 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tfk4n"] Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.763925 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv92t\" (UniqueName: \"kubernetes.io/projected/7cd9283a-98ef-4f80-af25-809f15d5ff36-kube-api-access-fv92t\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.764047 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-catalog-content\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.764227 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-utilities\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.866137 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-catalog-content\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.866884 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-utilities\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.866653 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-catalog-content\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.866947 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv92t\" (UniqueName: \"kubernetes.io/projected/7cd9283a-98ef-4f80-af25-809f15d5ff36-kube-api-access-fv92t\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.867371 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-utilities\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.893123 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv92t\" (UniqueName: \"kubernetes.io/projected/7cd9283a-98ef-4f80-af25-809f15d5ff36-kube-api-access-fv92t\") pod \"certified-operators-tfk4n\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:44 crc kubenswrapper[4684]: I0123 09:45:44.979637 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:45 crc kubenswrapper[4684]: I0123 09:45:45.289560 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tfk4n"] Jan 23 09:45:45 crc kubenswrapper[4684]: I0123 09:45:45.442359 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfk4n" event={"ID":"7cd9283a-98ef-4f80-af25-809f15d5ff36","Type":"ContainerStarted","Data":"6fe5601d8c4b5eafbe98a6b9e785d2d788a5af60c6f2355e288e1f507e38fc74"} Jan 23 09:45:45 crc kubenswrapper[4684]: I0123 09:45:45.593264 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9b804b-5b0a-479a-8834-10c4adb4ad14" path="/var/lib/kubelet/pods/eb9b804b-5b0a-479a-8834-10c4adb4ad14/volumes" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.004931 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.107754 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-inventory-0\") pod \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.107821 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2pgn\" (UniqueName: \"kubernetes.io/projected/2b58eeeb-a5c7-4034-9722-9118d571ca6e-kube-api-access-x2pgn\") pod \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.107999 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-ssh-key-openstack-edpm-ipam\") pod \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\" (UID: \"2b58eeeb-a5c7-4034-9722-9118d571ca6e\") " Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.146352 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b58eeeb-a5c7-4034-9722-9118d571ca6e-kube-api-access-x2pgn" (OuterVolumeSpecName: "kube-api-access-x2pgn") pod "2b58eeeb-a5c7-4034-9722-9118d571ca6e" (UID: "2b58eeeb-a5c7-4034-9722-9118d571ca6e"). InnerVolumeSpecName "kube-api-access-x2pgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.151919 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2b58eeeb-a5c7-4034-9722-9118d571ca6e" (UID: "2b58eeeb-a5c7-4034-9722-9118d571ca6e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.169922 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b58eeeb-a5c7-4034-9722-9118d571ca6e" (UID: "2b58eeeb-a5c7-4034-9722-9118d571ca6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.210246 4684 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.210300 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2pgn\" (UniqueName: \"kubernetes.io/projected/2b58eeeb-a5c7-4034-9722-9118d571ca6e-kube-api-access-x2pgn\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.210319 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b58eeeb-a5c7-4034-9722-9118d571ca6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.451857 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" event={"ID":"2b58eeeb-a5c7-4034-9722-9118d571ca6e","Type":"ContainerDied","Data":"7c95131d1a588c30c92a052381f639963a3a1b53ca89d71770660c46637eca58"} Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.451919 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c95131d1a588c30c92a052381f639963a3a1b53ca89d71770660c46637eca58" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.451872 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-tqfjf" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.454264 4684 generic.go:334] "Generic (PLEG): container finished" podID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerID="1addbd94f9ddd95a41b78108a3640718f8d3cc37bd7471b090d4afbd9abb1b30" exitCode=0 Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.454301 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfk4n" event={"ID":"7cd9283a-98ef-4f80-af25-809f15d5ff36","Type":"ContainerDied","Data":"1addbd94f9ddd95a41b78108a3640718f8d3cc37bd7471b090d4afbd9abb1b30"} Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.576832 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr"] Jan 23 09:45:46 crc kubenswrapper[4684]: E0123 09:45:46.577548 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b58eeeb-a5c7-4034-9722-9118d571ca6e" containerName="ssh-known-hosts-edpm-deployment" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.577648 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b58eeeb-a5c7-4034-9722-9118d571ca6e" containerName="ssh-known-hosts-edpm-deployment" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.577965 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b58eeeb-a5c7-4034-9722-9118d571ca6e" containerName="ssh-known-hosts-edpm-deployment" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.578747 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.580763 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.581673 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.585733 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.586426 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.592068 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr"] Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.623984 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.624054 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.624119 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69tw\" (UniqueName: \"kubernetes.io/projected/ae43dbba-1a4e-4d8d-8682-e77939d5650d-kube-api-access-m69tw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.725400 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.725490 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.725549 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m69tw\" (UniqueName: \"kubernetes.io/projected/ae43dbba-1a4e-4d8d-8682-e77939d5650d-kube-api-access-m69tw\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.733775 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.736161 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.749261 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m69tw\" (UniqueName: \"kubernetes.io/projected/ae43dbba-1a4e-4d8d-8682-e77939d5650d-kube-api-access-m69tw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rj8pr\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:46 crc kubenswrapper[4684]: I0123 09:45:46.904418 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:47 crc kubenswrapper[4684]: I0123 09:45:47.548988 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr"] Jan 23 09:45:47 crc kubenswrapper[4684]: W0123 09:45:47.563465 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae43dbba_1a4e_4d8d_8682_e77939d5650d.slice/crio-5b50878ec3b45984ecdf49ed1fb7bb66c39720fc342433a734f97f48527e4f9a WatchSource:0}: Error finding container 5b50878ec3b45984ecdf49ed1fb7bb66c39720fc342433a734f97f48527e4f9a: Status 404 returned error can't find the container with id 5b50878ec3b45984ecdf49ed1fb7bb66c39720fc342433a734f97f48527e4f9a Jan 23 09:45:48 crc kubenswrapper[4684]: I0123 09:45:48.471868 4684 generic.go:334] "Generic (PLEG): container finished" podID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerID="63880ed7fb3eed626d188e6ac96985c99d2b1486833d2eb4ebcc3a7a52475631" exitCode=0 Jan 23 09:45:48 crc kubenswrapper[4684]: I0123 09:45:48.472471 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfk4n" event={"ID":"7cd9283a-98ef-4f80-af25-809f15d5ff36","Type":"ContainerDied","Data":"63880ed7fb3eed626d188e6ac96985c99d2b1486833d2eb4ebcc3a7a52475631"} Jan 23 09:45:48 crc kubenswrapper[4684]: I0123 09:45:48.476968 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" event={"ID":"ae43dbba-1a4e-4d8d-8682-e77939d5650d","Type":"ContainerStarted","Data":"1bc7d2c4d9785d791b4f938771327dd59c3d30de92aa2f2586f8abe46f342ed0"} Jan 23 09:45:48 crc kubenswrapper[4684]: I0123 09:45:48.477014 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" 
event={"ID":"ae43dbba-1a4e-4d8d-8682-e77939d5650d","Type":"ContainerStarted","Data":"5b50878ec3b45984ecdf49ed1fb7bb66c39720fc342433a734f97f48527e4f9a"} Jan 23 09:45:48 crc kubenswrapper[4684]: I0123 09:45:48.512251 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" podStartSLOduration=2.084727707 podStartE2EDuration="2.512235699s" podCreationTimestamp="2026-01-23 09:45:46 +0000 UTC" firstStartedPulling="2026-01-23 09:45:47.56575122 +0000 UTC m=+2320.189129761" lastFinishedPulling="2026-01-23 09:45:47.993259212 +0000 UTC m=+2320.616637753" observedRunningTime="2026-01-23 09:45:48.506171666 +0000 UTC m=+2321.129550207" watchObservedRunningTime="2026-01-23 09:45:48.512235699 +0000 UTC m=+2321.135614240" Jan 23 09:45:50 crc kubenswrapper[4684]: I0123 09:45:50.494843 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfk4n" event={"ID":"7cd9283a-98ef-4f80-af25-809f15d5ff36","Type":"ContainerStarted","Data":"619d564f88ae22bad57a344636a709fa3bdfb79bc2b0ddf011065ffd8214a451"} Jan 23 09:45:50 crc kubenswrapper[4684]: I0123 09:45:50.517803 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tfk4n" podStartSLOduration=3.022588362 podStartE2EDuration="6.517784324s" podCreationTimestamp="2026-01-23 09:45:44 +0000 UTC" firstStartedPulling="2026-01-23 09:45:46.456004784 +0000 UTC m=+2319.079383325" lastFinishedPulling="2026-01-23 09:45:49.951200746 +0000 UTC m=+2322.574579287" observedRunningTime="2026-01-23 09:45:50.511823133 +0000 UTC m=+2323.135201674" watchObservedRunningTime="2026-01-23 09:45:50.517784324 +0000 UTC m=+2323.141162865" Jan 23 09:45:54 crc kubenswrapper[4684]: I0123 09:45:54.980829 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:54 crc kubenswrapper[4684]: I0123 09:45:54.981344 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:55 crc kubenswrapper[4684]: I0123 09:45:55.031232 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:55 crc kubenswrapper[4684]: I0123 09:45:55.593064 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:57 crc kubenswrapper[4684]: I0123 09:45:57.189118 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tfk4n"] Jan 23 09:45:57 crc kubenswrapper[4684]: I0123 09:45:57.553337 4684 generic.go:334] "Generic (PLEG): container finished" podID="ae43dbba-1a4e-4d8d-8682-e77939d5650d" containerID="1bc7d2c4d9785d791b4f938771327dd59c3d30de92aa2f2586f8abe46f342ed0" exitCode=0 Jan 23 09:45:57 crc kubenswrapper[4684]: I0123 09:45:57.553558 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tfk4n" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="registry-server" containerID="cri-o://619d564f88ae22bad57a344636a709fa3bdfb79bc2b0ddf011065ffd8214a451" gracePeriod=2 Jan 23 09:45:57 crc kubenswrapper[4684]: I0123 09:45:57.553906 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" 
event={"ID":"ae43dbba-1a4e-4d8d-8682-e77939d5650d","Type":"ContainerDied","Data":"1bc7d2c4d9785d791b4f938771327dd59c3d30de92aa2f2586f8abe46f342ed0"} Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.562655 4684 generic.go:334] "Generic (PLEG): container finished" podID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerID="619d564f88ae22bad57a344636a709fa3bdfb79bc2b0ddf011065ffd8214a451" exitCode=0 Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.562736 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfk4n" event={"ID":"7cd9283a-98ef-4f80-af25-809f15d5ff36","Type":"ContainerDied","Data":"619d564f88ae22bad57a344636a709fa3bdfb79bc2b0ddf011065ffd8214a451"} Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.816601 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.819425 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-catalog-content\") pod \"7cd9283a-98ef-4f80-af25-809f15d5ff36\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.819537 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-utilities\") pod \"7cd9283a-98ef-4f80-af25-809f15d5ff36\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.819600 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv92t\" (UniqueName: \"kubernetes.io/projected/7cd9283a-98ef-4f80-af25-809f15d5ff36-kube-api-access-fv92t\") pod \"7cd9283a-98ef-4f80-af25-809f15d5ff36\" (UID: \"7cd9283a-98ef-4f80-af25-809f15d5ff36\") " Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.820491 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-utilities" (OuterVolumeSpecName: "utilities") pod "7cd9283a-98ef-4f80-af25-809f15d5ff36" (UID: "7cd9283a-98ef-4f80-af25-809f15d5ff36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.847980 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd9283a-98ef-4f80-af25-809f15d5ff36-kube-api-access-fv92t" (OuterVolumeSpecName: "kube-api-access-fv92t") pod "7cd9283a-98ef-4f80-af25-809f15d5ff36" (UID: "7cd9283a-98ef-4f80-af25-809f15d5ff36"). InnerVolumeSpecName "kube-api-access-fv92t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.921142 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.921168 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv92t\" (UniqueName: \"kubernetes.io/projected/7cd9283a-98ef-4f80-af25-809f15d5ff36-kube-api-access-fv92t\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.937016 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cd9283a-98ef-4f80-af25-809f15d5ff36" (UID: "7cd9283a-98ef-4f80-af25-809f15d5ff36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:58 crc kubenswrapper[4684]: I0123 09:45:58.986924 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.023285 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd9283a-98ef-4f80-af25-809f15d5ff36-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.124117 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-inventory\") pod \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.124331 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-ssh-key-openstack-edpm-ipam\") pod \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.124375 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m69tw\" (UniqueName: \"kubernetes.io/projected/ae43dbba-1a4e-4d8d-8682-e77939d5650d-kube-api-access-m69tw\") pod \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\" (UID: \"ae43dbba-1a4e-4d8d-8682-e77939d5650d\") " Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.128441 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae43dbba-1a4e-4d8d-8682-e77939d5650d-kube-api-access-m69tw" (OuterVolumeSpecName: "kube-api-access-m69tw") pod "ae43dbba-1a4e-4d8d-8682-e77939d5650d" (UID: "ae43dbba-1a4e-4d8d-8682-e77939d5650d"). InnerVolumeSpecName "kube-api-access-m69tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.150476 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-inventory" (OuterVolumeSpecName: "inventory") pod "ae43dbba-1a4e-4d8d-8682-e77939d5650d" (UID: "ae43dbba-1a4e-4d8d-8682-e77939d5650d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.154005 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae43dbba-1a4e-4d8d-8682-e77939d5650d" (UID: "ae43dbba-1a4e-4d8d-8682-e77939d5650d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.227337 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.227377 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae43dbba-1a4e-4d8d-8682-e77939d5650d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.227387 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m69tw\" (UniqueName: \"kubernetes.io/projected/ae43dbba-1a4e-4d8d-8682-e77939d5650d-kube-api-access-m69tw\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.572752 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" event={"ID":"ae43dbba-1a4e-4d8d-8682-e77939d5650d","Type":"ContainerDied","Data":"5b50878ec3b45984ecdf49ed1fb7bb66c39720fc342433a734f97f48527e4f9a"} Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.572818 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b50878ec3b45984ecdf49ed1fb7bb66c39720fc342433a734f97f48527e4f9a" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.572996 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.575196 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfk4n" event={"ID":"7cd9283a-98ef-4f80-af25-809f15d5ff36","Type":"ContainerDied","Data":"6fe5601d8c4b5eafbe98a6b9e785d2d788a5af60c6f2355e288e1f507e38fc74"} Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.575237 4684 scope.go:117] "RemoveContainer" containerID="619d564f88ae22bad57a344636a709fa3bdfb79bc2b0ddf011065ffd8214a451" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.575372 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tfk4n" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.630425 4684 scope.go:117] "RemoveContainer" containerID="63880ed7fb3eed626d188e6ac96985c99d2b1486833d2eb4ebcc3a7a52475631" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.682933 4684 scope.go:117] "RemoveContainer" containerID="1addbd94f9ddd95a41b78108a3640718f8d3cc37bd7471b090d4afbd9abb1b30" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.688815 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tfk4n"] Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.697772 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tfk4n"] Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.707342 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm"] Jan 23 09:45:59 crc kubenswrapper[4684]: E0123 09:45:59.707792 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="extract-content" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.707805 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="extract-content" Jan 23 09:45:59 crc kubenswrapper[4684]: E0123 09:45:59.707826 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="extract-utilities" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.707834 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="extract-utilities" Jan 23 09:45:59 crc kubenswrapper[4684]: E0123 09:45:59.707852 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae43dbba-1a4e-4d8d-8682-e77939d5650d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.707863 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae43dbba-1a4e-4d8d-8682-e77939d5650d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:45:59 crc kubenswrapper[4684]: E0123 09:45:59.707886 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="registry-server" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.707893 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="registry-server" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.708092 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae43dbba-1a4e-4d8d-8682-e77939d5650d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.708116 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" containerName="registry-server" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.708990 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.713903 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.714152 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.714295 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.714410 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.717935 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm"] Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.846797 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.846849 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.846960 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x879p\" (UniqueName: \"kubernetes.io/projected/3979ebeb-8bcf-4a62-99a8-10dac39bf900-kube-api-access-x879p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.948682 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.948766 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.948856 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x879p\" (UniqueName: \"kubernetes.io/projected/3979ebeb-8bcf-4a62-99a8-10dac39bf900-kube-api-access-x879p\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.952828 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.953213 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:45:59 crc kubenswrapper[4684]: I0123 09:45:59.974511 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x879p\" (UniqueName: \"kubernetes.io/projected/3979ebeb-8bcf-4a62-99a8-10dac39bf900-kube-api-access-x879p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:46:00 crc kubenswrapper[4684]: I0123 09:46:00.112578 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:46:00 crc kubenswrapper[4684]: I0123 09:46:00.722775 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm"] Jan 23 09:46:00 crc kubenswrapper[4684]: W0123 09:46:00.725968 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3979ebeb_8bcf_4a62_99a8_10dac39bf900.slice/crio-ed2d140d4ca45e1ebc7bf7c8b1d8bac1415e8c6aee279451279979d4c09d4a56 WatchSource:0}: Error finding container ed2d140d4ca45e1ebc7bf7c8b1d8bac1415e8c6aee279451279979d4c09d4a56: Status 404 returned error can't find the container with id ed2d140d4ca45e1ebc7bf7c8b1d8bac1415e8c6aee279451279979d4c09d4a56 Jan 23 09:46:01 crc kubenswrapper[4684]: I0123 09:46:01.596730 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd9283a-98ef-4f80-af25-809f15d5ff36" path="/var/lib/kubelet/pods/7cd9283a-98ef-4f80-af25-809f15d5ff36/volumes" Jan 23 09:46:01 crc kubenswrapper[4684]: I0123 09:46:01.599003 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" event={"ID":"3979ebeb-8bcf-4a62-99a8-10dac39bf900","Type":"ContainerStarted","Data":"a44c3af07c2082a113194cad0994e4fb45108d8000cc264b44f4d3eb30e6da93"} Jan 23 09:46:01 crc kubenswrapper[4684]: I0123 09:46:01.599046 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" event={"ID":"3979ebeb-8bcf-4a62-99a8-10dac39bf900","Type":"ContainerStarted","Data":"ed2d140d4ca45e1ebc7bf7c8b1d8bac1415e8c6aee279451279979d4c09d4a56"} Jan 23 09:46:01 crc kubenswrapper[4684]: I0123 09:46:01.614236 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" 
podStartSLOduration=2.167577482 podStartE2EDuration="2.614216491s" podCreationTimestamp="2026-01-23 09:45:59 +0000 UTC" firstStartedPulling="2026-01-23 09:46:00.732511664 +0000 UTC m=+2333.355890205" lastFinishedPulling="2026-01-23 09:46:01.179150663 +0000 UTC m=+2333.802529214" observedRunningTime="2026-01-23 09:46:01.610644509 +0000 UTC m=+2334.234023050" watchObservedRunningTime="2026-01-23 09:46:01.614216491 +0000 UTC m=+2334.237595032" Jan 23 09:46:12 crc kubenswrapper[4684]: I0123 09:46:12.903639 4684 generic.go:334] "Generic (PLEG): container finished" podID="3979ebeb-8bcf-4a62-99a8-10dac39bf900" containerID="a44c3af07c2082a113194cad0994e4fb45108d8000cc264b44f4d3eb30e6da93" exitCode=0 Jan 23 09:46:12 crc kubenswrapper[4684]: I0123 09:46:12.903745 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" event={"ID":"3979ebeb-8bcf-4a62-99a8-10dac39bf900","Type":"ContainerDied","Data":"a44c3af07c2082a113194cad0994e4fb45108d8000cc264b44f4d3eb30e6da93"} Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.347772 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.439767 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-ssh-key-openstack-edpm-ipam\") pod \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.440004 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-inventory\") pod \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.440106 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x879p\" (UniqueName: \"kubernetes.io/projected/3979ebeb-8bcf-4a62-99a8-10dac39bf900-kube-api-access-x879p\") pod \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\" (UID: \"3979ebeb-8bcf-4a62-99a8-10dac39bf900\") " Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.447797 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3979ebeb-8bcf-4a62-99a8-10dac39bf900-kube-api-access-x879p" (OuterVolumeSpecName: "kube-api-access-x879p") pod "3979ebeb-8bcf-4a62-99a8-10dac39bf900" (UID: "3979ebeb-8bcf-4a62-99a8-10dac39bf900"). InnerVolumeSpecName "kube-api-access-x879p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.476261 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3979ebeb-8bcf-4a62-99a8-10dac39bf900" (UID: "3979ebeb-8bcf-4a62-99a8-10dac39bf900"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.476315 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-inventory" (OuterVolumeSpecName: "inventory") pod "3979ebeb-8bcf-4a62-99a8-10dac39bf900" (UID: "3979ebeb-8bcf-4a62-99a8-10dac39bf900"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.543192 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x879p\" (UniqueName: \"kubernetes.io/projected/3979ebeb-8bcf-4a62-99a8-10dac39bf900-kube-api-access-x879p\") on node \"crc\" DevicePath \"\"" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.543241 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.543255 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3979ebeb-8bcf-4a62-99a8-10dac39bf900-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.802076 4684 scope.go:117] "RemoveContainer" containerID="aeebdc1c5705ed418bbd094135a18f77d5369aac12244fe848e31f118b52fa4f" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.928518 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" event={"ID":"3979ebeb-8bcf-4a62-99a8-10dac39bf900","Type":"ContainerDied","Data":"ed2d140d4ca45e1ebc7bf7c8b1d8bac1415e8c6aee279451279979d4c09d4a56"} Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.928808 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2d140d4ca45e1ebc7bf7c8b1d8bac1415e8c6aee279451279979d4c09d4a56" Jan 23 09:46:14 crc kubenswrapper[4684]: I0123 09:46:14.928943 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm" Jan 23 09:46:43 crc kubenswrapper[4684]: I0123 09:46:43.729072 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:46:43 crc kubenswrapper[4684]: I0123 09:46:43.729568 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:47:13 crc kubenswrapper[4684]: I0123 09:47:13.728770 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:47:13 crc kubenswrapper[4684]: I0123 09:47:13.729299 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:47:43 crc kubenswrapper[4684]: I0123 09:47:43.728539 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:47:43 crc kubenswrapper[4684]: I0123 09:47:43.729074 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:47:43 crc kubenswrapper[4684]: I0123 09:47:43.729119 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:47:43 crc kubenswrapper[4684]: I0123 09:47:43.729576 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:47:43 crc kubenswrapper[4684]: I0123 09:47:43.729622 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" gracePeriod=600 Jan 23 09:47:44 crc kubenswrapper[4684]: E0123 09:47:44.409902 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:47:44 crc kubenswrapper[4684]: I0123 09:47:44.619013 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" exitCode=0 Jan 23 09:47:44 crc kubenswrapper[4684]: I0123 09:47:44.619084 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632"} Jan 23 09:47:44 crc kubenswrapper[4684]: I0123 09:47:44.619146 4684 scope.go:117] "RemoveContainer" containerID="4ca7091f270e90c736fc01d37ad639ae0e6d8467b5f3f891e0f994b8fe5136e3" Jan 23 09:47:44 crc kubenswrapper[4684]: I0123 09:47:44.620251 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:47:44 crc kubenswrapper[4684]: E0123 09:47:44.620555 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:47:58 crc kubenswrapper[4684]: I0123 09:47:58.582583 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:47:58 crc kubenswrapper[4684]: E0123 09:47:58.583588 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:48:12 crc kubenswrapper[4684]: I0123 09:48:12.583079 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:48:12 crc kubenswrapper[4684]: E0123 09:48:12.584830 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:48:25 crc kubenswrapper[4684]: I0123 09:48:25.583984 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:48:25 crc kubenswrapper[4684]: E0123 09:48:25.584993 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:48:38 crc kubenswrapper[4684]: I0123 09:48:38.582505 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:48:38 crc kubenswrapper[4684]: E0123 09:48:38.583131 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:48:49 crc kubenswrapper[4684]: I0123 09:48:49.581986 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:48:49 crc kubenswrapper[4684]: E0123 09:48:49.582824 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:49:02 crc kubenswrapper[4684]: I0123 09:49:02.583183 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:49:02 crc kubenswrapper[4684]: E0123 09:49:02.584001 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:49:17 crc kubenswrapper[4684]: I0123 09:49:17.586368 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:49:17 crc kubenswrapper[4684]: E0123 09:49:17.587120 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:49:30 crc kubenswrapper[4684]: I0123 09:49:30.581757 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:49:30 crc kubenswrapper[4684]: E0123 09:49:30.582434 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" 
podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:49:44 crc kubenswrapper[4684]: I0123 09:49:44.582272 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:49:44 crc kubenswrapper[4684]: E0123 09:49:44.583159 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:49:55 crc kubenswrapper[4684]: I0123 09:49:55.582027 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:49:55 crc kubenswrapper[4684]: E0123 09:49:55.584916 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:50:10 crc kubenswrapper[4684]: I0123 09:50:10.581660 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:50:10 crc kubenswrapper[4684]: E0123 09:50:10.582471 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:50:24 crc kubenswrapper[4684]: I0123 09:50:24.584973 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:50:24 crc kubenswrapper[4684]: E0123 09:50:24.585751 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:50:35 crc kubenswrapper[4684]: I0123 09:50:35.581613 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:50:35 crc kubenswrapper[4684]: E0123 09:50:35.582416 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:50:48 crc kubenswrapper[4684]: I0123 09:50:48.582007 4684 scope.go:117] "RemoveContainer" 
containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:50:48 crc kubenswrapper[4684]: E0123 09:50:48.582740 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:50:59 crc kubenswrapper[4684]: I0123 09:50:59.582349 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:50:59 crc kubenswrapper[4684]: E0123 09:50:59.583107 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:51:10 crc kubenswrapper[4684]: I0123 09:51:10.582603 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:51:10 crc kubenswrapper[4684]: E0123 09:51:10.583931 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:51:23 crc kubenswrapper[4684]: I0123 09:51:23.583132 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:51:23 crc kubenswrapper[4684]: E0123 09:51:23.583887 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:51:35 crc kubenswrapper[4684]: I0123 09:51:35.583119 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:51:35 crc kubenswrapper[4684]: E0123 09:51:35.583886 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:51:49 crc kubenswrapper[4684]: I0123 09:51:49.582797 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:51:49 crc kubenswrapper[4684]: E0123 09:51:49.583673 4684 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:52:02 crc kubenswrapper[4684]: I0123 09:52:02.582318 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:52:02 crc kubenswrapper[4684]: E0123 09:52:02.583062 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:52:14 crc kubenswrapper[4684]: I0123 09:52:14.581953 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:52:14 crc kubenswrapper[4684]: E0123 09:52:14.582723 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:52:25 crc kubenswrapper[4684]: I0123 09:52:25.585312 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:52:25 crc kubenswrapper[4684]: E0123 09:52:25.586224 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:52:38 crc kubenswrapper[4684]: I0123 09:52:38.582857 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:52:38 crc kubenswrapper[4684]: E0123 09:52:38.583714 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:52:52 crc kubenswrapper[4684]: I0123 09:52:52.582538 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:52:53 crc kubenswrapper[4684]: I0123 09:52:53.164227 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" 
event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"8d1f652ff74148a06a7cece32bb007304d1575a17aa3e4576d5bb01005d192bb"} Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.723127 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.740093 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.748909 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.760315 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r79jr"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.768514 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-nxqrp"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.777588 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gjrkm"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.787611 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.797208 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.805089 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dww24"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.812875 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8qgmc"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.819571 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.828833 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-tqfjf"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.837741 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.845471 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.853162 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.860617 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-tqfjf"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.869928 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rj8pr"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.878781 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gz859"] Jan 23 09:53:18 crc 
kubenswrapper[4684]: I0123 09:53:18.886182 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k965m"] Jan 23 09:53:18 crc kubenswrapper[4684]: I0123 09:53:18.893592 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jx7hz"] Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.593020 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0208ad-b4f8-4798-b935-e541e61a3918" path="/var/lib/kubelet/pods/1a0208ad-b4f8-4798-b935-e541e61a3918/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.594343 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215ee287-6881-4991-ae11-e63eb7605a0a" path="/var/lib/kubelet/pods/215ee287-6881-4991-ae11-e63eb7605a0a/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.595285 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b58eeeb-a5c7-4034-9722-9118d571ca6e" path="/var/lib/kubelet/pods/2b58eeeb-a5c7-4034-9722-9118d571ca6e/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.596016 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369572f8-f12b-4c03-85d2-82ca737357ed" path="/var/lib/kubelet/pods/369572f8-f12b-4c03-85d2-82ca737357ed/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.597534 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37823d9e-d4f7-4efa-9edd-0dc597578f7e" path="/var/lib/kubelet/pods/37823d9e-d4f7-4efa-9edd-0dc597578f7e/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.598313 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3979ebeb-8bcf-4a62-99a8-10dac39bf900" path="/var/lib/kubelet/pods/3979ebeb-8bcf-4a62-99a8-10dac39bf900/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.598907 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7fb6ce-b97d-4827-b8c0-254582176d6d" path="/var/lib/kubelet/pods/5c7fb6ce-b97d-4827-b8c0-254582176d6d/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.600039 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990467eb-1e9f-4b1f-bf85-dd9980a0b5aa" path="/var/lib/kubelet/pods/990467eb-1e9f-4b1f-bf85-dd9980a0b5aa/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.600613 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae43dbba-1a4e-4d8d-8682-e77939d5650d" path="/var/lib/kubelet/pods/ae43dbba-1a4e-4d8d-8682-e77939d5650d/volumes" Jan 23 09:53:19 crc kubenswrapper[4684]: I0123 09:53:19.601250 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b6123c-b0a7-4d22-a9b7-5da8a1598fff" path="/var/lib/kubelet/pods/b4b6123c-b0a7-4d22-a9b7-5da8a1598fff/volumes" Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.826953 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"] Jan 23 09:53:31 crc kubenswrapper[4684]: E0123 09:53:31.827886 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3979ebeb-8bcf-4a62-99a8-10dac39bf900" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.827908 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3979ebeb-8bcf-4a62-99a8-10dac39bf900" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 
09:53:31.828099 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3979ebeb-8bcf-4a62-99a8-10dac39bf900" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.828785 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.832635 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.834528 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.834531 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.835012 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.835611 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.840519 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"]
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.881012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.881079 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.881126 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrx5r\" (UniqueName: \"kubernetes.io/projected/6572a448-1ced-481b-af00-e2edb0d95187-kube-api-access-wrx5r\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.881351 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.881461 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.983008 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.983067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.983125 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.983144 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.983184 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrx5r\" (UniqueName: \"kubernetes.io/projected/6572a448-1ced-481b-af00-e2edb0d95187-kube-api-access-wrx5r\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.989017 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.989252 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.990123 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:31 crc kubenswrapper[4684]: I0123 09:53:31.995407 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:32 crc kubenswrapper[4684]: I0123 09:53:32.006411 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrx5r\" (UniqueName: \"kubernetes.io/projected/6572a448-1ced-481b-af00-e2edb0d95187-kube-api-access-wrx5r\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:32 crc kubenswrapper[4684]: I0123 09:53:32.148321 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"
Jan 23 09:53:32 crc kubenswrapper[4684]: I0123 09:53:32.709477 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd"]
Jan 23 09:53:32 crc kubenswrapper[4684]: I0123 09:53:32.713934 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 09:53:33 crc kubenswrapper[4684]: I0123 09:53:33.494816 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" event={"ID":"6572a448-1ced-481b-af00-e2edb0d95187","Type":"ContainerStarted","Data":"e161c48a33c9966ed1ef458cf5111dfc04708deedc1b63d563e44f1c51cfdd22"}
Jan 23 09:53:34 crc kubenswrapper[4684]: I0123 09:53:34.506609 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" event={"ID":"6572a448-1ced-481b-af00-e2edb0d95187","Type":"ContainerStarted","Data":"aeb9f8d9d2cb1683e2f8e1a570c6d836756589bb685426effcdf77aa57b659f7"}
Jan 23 09:53:34 crc kubenswrapper[4684]: I0123 09:53:34.550300 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" podStartSLOduration=2.940997854 podStartE2EDuration="3.550268891s" podCreationTimestamp="2026-01-23 09:53:31 +0000 UTC" firstStartedPulling="2026-01-23 09:53:32.713606486 +0000 UTC m=+2785.336985027" lastFinishedPulling="2026-01-23 09:53:33.322877523 +0000 UTC m=+2785.946256064" observedRunningTime="2026-01-23 09:53:34.523931685 +0000 UTC m=+2787.147310226" watchObservedRunningTime="2026-01-23 09:53:34.550268891 +0000 UTC m=+2787.173647432"
Need to start a new one" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.274058 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9st26"] Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.367749 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pprn\" (UniqueName: \"kubernetes.io/projected/3d29620c-2356-48fe-b10e-a559e2be975c-kube-api-access-6pprn\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.367828 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-catalog-content\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.367898 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-utilities\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.469034 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-catalog-content\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.469121 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-utilities\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.469218 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pprn\" (UniqueName: \"kubernetes.io/projected/3d29620c-2356-48fe-b10e-a559e2be975c-kube-api-access-6pprn\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.469756 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-utilities\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.469757 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-catalog-content\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.491788 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6pprn\" (UniqueName: \"kubernetes.io/projected/3d29620c-2356-48fe-b10e-a559e2be975c-kube-api-access-6pprn\") pod \"redhat-operators-9st26\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:42 crc kubenswrapper[4684]: I0123 09:53:42.563496 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:53:43 crc kubenswrapper[4684]: I0123 09:53:43.059656 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9st26"] Jan 23 09:53:43 crc kubenswrapper[4684]: I0123 09:53:43.618682 4684 generic.go:334] "Generic (PLEG): container finished" podID="3d29620c-2356-48fe-b10e-a559e2be975c" containerID="091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd" exitCode=0 Jan 23 09:53:43 crc kubenswrapper[4684]: I0123 09:53:43.619263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerDied","Data":"091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd"} Jan 23 09:53:43 crc kubenswrapper[4684]: I0123 09:53:43.619315 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerStarted","Data":"2bce2ba69f878b836ab1c6146a1e10e9bd36a35163c9445198b0ed6c10573f3d"} Jan 23 09:53:45 crc kubenswrapper[4684]: I0123 09:53:45.639183 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerStarted","Data":"ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc"} Jan 23 09:53:54 crc kubenswrapper[4684]: I0123 09:53:54.708592 4684 generic.go:334] "Generic (PLEG): container finished" podID="3d29620c-2356-48fe-b10e-a559e2be975c" containerID="ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc" exitCode=0 Jan 23 09:53:54 crc kubenswrapper[4684]: I0123 09:53:54.708691 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerDied","Data":"ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc"} Jan 23 09:53:55 crc kubenswrapper[4684]: I0123 09:53:55.719574 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerStarted","Data":"8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e"} Jan 23 09:53:55 crc kubenswrapper[4684]: I0123 09:53:55.738410 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9st26" podStartSLOduration=2.240219718 podStartE2EDuration="13.738391331s" podCreationTimestamp="2026-01-23 09:53:42 +0000 UTC" firstStartedPulling="2026-01-23 09:53:43.621613552 +0000 UTC m=+2796.244992093" lastFinishedPulling="2026-01-23 09:53:55.119785165 +0000 UTC m=+2807.743163706" observedRunningTime="2026-01-23 09:53:55.737819294 +0000 UTC m=+2808.361197825" watchObservedRunningTime="2026-01-23 09:53:55.738391331 +0000 UTC m=+2808.361769872" Jan 23 09:53:56 crc kubenswrapper[4684]: I0123 09:53:56.728613 4684 generic.go:334] "Generic (PLEG): container finished" podID="6572a448-1ced-481b-af00-e2edb0d95187" 
containerID="aeb9f8d9d2cb1683e2f8e1a570c6d836756589bb685426effcdf77aa57b659f7" exitCode=0 Jan 23 09:53:56 crc kubenswrapper[4684]: I0123 09:53:56.728653 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" event={"ID":"6572a448-1ced-481b-af00-e2edb0d95187","Type":"ContainerDied","Data":"aeb9f8d9d2cb1683e2f8e1a570c6d836756589bb685426effcdf77aa57b659f7"} Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.186381 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.295201 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-repo-setup-combined-ca-bundle\") pod \"6572a448-1ced-481b-af00-e2edb0d95187\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.295346 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ceph\") pod \"6572a448-1ced-481b-af00-e2edb0d95187\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.295406 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-inventory\") pod \"6572a448-1ced-481b-af00-e2edb0d95187\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.295516 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrx5r\" (UniqueName: \"kubernetes.io/projected/6572a448-1ced-481b-af00-e2edb0d95187-kube-api-access-wrx5r\") pod \"6572a448-1ced-481b-af00-e2edb0d95187\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.295569 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ssh-key-openstack-edpm-ipam\") pod \"6572a448-1ced-481b-af00-e2edb0d95187\" (UID: \"6572a448-1ced-481b-af00-e2edb0d95187\") " Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.310984 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ceph" (OuterVolumeSpecName: "ceph") pod "6572a448-1ced-481b-af00-e2edb0d95187" (UID: "6572a448-1ced-481b-af00-e2edb0d95187"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.311073 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6572a448-1ced-481b-af00-e2edb0d95187" (UID: "6572a448-1ced-481b-af00-e2edb0d95187"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.311393 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6572a448-1ced-481b-af00-e2edb0d95187-kube-api-access-wrx5r" (OuterVolumeSpecName: "kube-api-access-wrx5r") pod "6572a448-1ced-481b-af00-e2edb0d95187" (UID: "6572a448-1ced-481b-af00-e2edb0d95187"). InnerVolumeSpecName "kube-api-access-wrx5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.326322 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6572a448-1ced-481b-af00-e2edb0d95187" (UID: "6572a448-1ced-481b-af00-e2edb0d95187"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.331518 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-inventory" (OuterVolumeSpecName: "inventory") pod "6572a448-1ced-481b-af00-e2edb0d95187" (UID: "6572a448-1ced-481b-af00-e2edb0d95187"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.397805 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.397846 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.397859 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrx5r\" (UniqueName: \"kubernetes.io/projected/6572a448-1ced-481b-af00-e2edb0d95187-kube-api-access-wrx5r\") on node \"crc\" DevicePath \"\"" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.397872 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.397887 4684 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572a448-1ced-481b-af00-e2edb0d95187-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.749013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" event={"ID":"6572a448-1ced-481b-af00-e2edb0d95187","Type":"ContainerDied","Data":"e161c48a33c9966ed1ef458cf5111dfc04708deedc1b63d563e44f1c51cfdd22"} Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.749098 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e161c48a33c9966ed1ef458cf5111dfc04708deedc1b63d563e44f1c51cfdd22" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.749168 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.856908 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk"] Jan 23 09:53:58 crc kubenswrapper[4684]: E0123 09:53:58.857970 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6572a448-1ced-481b-af00-e2edb0d95187" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.858072 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6572a448-1ced-481b-af00-e2edb0d95187" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.858505 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6572a448-1ced-481b-af00-e2edb0d95187" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.859445 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.864342 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.864597 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.865077 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.865369 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.867864 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk"] Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.870950 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.910247 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.910581 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.910755 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.910953 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5p4j\" (UniqueName: \"kubernetes.io/projected/47eb1e50-9644-40c1-b739-f70c2274808c-kube-api-access-v5p4j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:58 crc kubenswrapper[4684]: I0123 09:53:58.911107 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.012663 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.012826 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5p4j\" (UniqueName: \"kubernetes.io/projected/47eb1e50-9644-40c1-b739-f70c2274808c-kube-api-access-v5p4j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.012858 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.012907 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.012941 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.016426 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: 
\"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.017040 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.018579 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.019600 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.034822 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5p4j\" (UniqueName: \"kubernetes.io/projected/47eb1e50-9644-40c1-b739-f70c2274808c-kube-api-access-v5p4j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.191413 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:53:59 crc kubenswrapper[4684]: W0123 09:53:59.785955 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47eb1e50_9644_40c1_b739_f70c2274808c.slice/crio-d6a5d97bc3ae5b308bac4594c6bcae5a102a12e532fab6951321e405fa2d5312 WatchSource:0}: Error finding container d6a5d97bc3ae5b308bac4594c6bcae5a102a12e532fab6951321e405fa2d5312: Status 404 returned error can't find the container with id d6a5d97bc3ae5b308bac4594c6bcae5a102a12e532fab6951321e405fa2d5312 Jan 23 09:53:59 crc kubenswrapper[4684]: I0123 09:53:59.796679 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk"] Jan 23 09:54:00 crc kubenswrapper[4684]: I0123 09:54:00.771652 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" event={"ID":"47eb1e50-9644-40c1-b739-f70c2274808c","Type":"ContainerStarted","Data":"e6e84931bf8815d1c895dc0510faf536fec7e402e35591ea2462d0e428b57efb"} Jan 23 09:54:00 crc kubenswrapper[4684]: I0123 09:54:00.772108 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" event={"ID":"47eb1e50-9644-40c1-b739-f70c2274808c","Type":"ContainerStarted","Data":"d6a5d97bc3ae5b308bac4594c6bcae5a102a12e532fab6951321e405fa2d5312"} Jan 23 09:54:00 crc kubenswrapper[4684]: I0123 09:54:00.799181 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" podStartSLOduration=2.275870702 podStartE2EDuration="2.799162568s" podCreationTimestamp="2026-01-23 09:53:58 +0000 UTC" firstStartedPulling="2026-01-23 09:53:59.78926786 +0000 UTC m=+2812.412646391" lastFinishedPulling="2026-01-23 09:54:00.312559726 +0000 UTC m=+2812.935938257" observedRunningTime="2026-01-23 09:54:00.790540441 +0000 UTC m=+2813.413918992" watchObservedRunningTime="2026-01-23 09:54:00.799162568 +0000 UTC m=+2813.422541109" Jan 23 09:54:02 crc kubenswrapper[4684]: I0123 09:54:02.564331 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:54:02 crc kubenswrapper[4684]: I0123 09:54:02.564424 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:54:02 crc kubenswrapper[4684]: I0123 09:54:02.611386 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:54:02 crc kubenswrapper[4684]: I0123 09:54:02.828166 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:54:02 crc kubenswrapper[4684]: I0123 09:54:02.880226 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9st26"] Jan 23 09:54:04 crc kubenswrapper[4684]: I0123 09:54:04.803749 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9st26" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="registry-server" containerID="cri-o://8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e" gracePeriod=2 Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.275438 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.370426 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pprn\" (UniqueName: \"kubernetes.io/projected/3d29620c-2356-48fe-b10e-a559e2be975c-kube-api-access-6pprn\") pod \"3d29620c-2356-48fe-b10e-a559e2be975c\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.370552 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-catalog-content\") pod \"3d29620c-2356-48fe-b10e-a559e2be975c\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.370686 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-utilities\") pod \"3d29620c-2356-48fe-b10e-a559e2be975c\" (UID: \"3d29620c-2356-48fe-b10e-a559e2be975c\") " Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.372605 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-utilities" (OuterVolumeSpecName: "utilities") pod "3d29620c-2356-48fe-b10e-a559e2be975c" (UID: "3d29620c-2356-48fe-b10e-a559e2be975c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.379666 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d29620c-2356-48fe-b10e-a559e2be975c-kube-api-access-6pprn" (OuterVolumeSpecName: "kube-api-access-6pprn") pod "3d29620c-2356-48fe-b10e-a559e2be975c" (UID: "3d29620c-2356-48fe-b10e-a559e2be975c"). InnerVolumeSpecName "kube-api-access-6pprn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.472499 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pprn\" (UniqueName: \"kubernetes.io/projected/3d29620c-2356-48fe-b10e-a559e2be975c-kube-api-access-6pprn\") on node \"crc\" DevicePath \"\"" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.472547 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.504555 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d29620c-2356-48fe-b10e-a559e2be975c" (UID: "3d29620c-2356-48fe-b10e-a559e2be975c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.574719 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29620c-2356-48fe-b10e-a559e2be975c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.987782 4684 generic.go:334] "Generic (PLEG): container finished" podID="3d29620c-2356-48fe-b10e-a559e2be975c" containerID="8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e" exitCode=0 Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.988231 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerDied","Data":"8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e"} Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.988266 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9st26" event={"ID":"3d29620c-2356-48fe-b10e-a559e2be975c","Type":"ContainerDied","Data":"2bce2ba69f878b836ab1c6146a1e10e9bd36a35163c9445198b0ed6c10573f3d"} Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.988289 4684 scope.go:117] "RemoveContainer" containerID="8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e" Jan 23 09:54:05 crc kubenswrapper[4684]: I0123 09:54:05.988480 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9st26" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.041791 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9st26"] Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.052271 4684 scope.go:117] "RemoveContainer" containerID="ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.053343 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9st26"] Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.098355 4684 scope.go:117] "RemoveContainer" containerID="091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.137068 4684 scope.go:117] "RemoveContainer" containerID="8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e" Jan 23 09:54:06 crc kubenswrapper[4684]: E0123 09:54:06.137633 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e\": container with ID starting with 8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e not found: ID does not exist" containerID="8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.137679 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e"} err="failed to get container status \"8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e\": rpc error: code = NotFound desc = could not find container \"8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e\": container with ID starting with 8fa33dc708344f1b25846b52febb500bd8b4d807a97998fba770c07817413d2e not found: ID does not exist" Jan 23 09:54:06 crc 
kubenswrapper[4684]: I0123 09:54:06.137715 4684 scope.go:117] "RemoveContainer" containerID="ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc" Jan 23 09:54:06 crc kubenswrapper[4684]: E0123 09:54:06.138161 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc\": container with ID starting with ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc not found: ID does not exist" containerID="ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.138183 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc"} err="failed to get container status \"ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc\": rpc error: code = NotFound desc = could not find container \"ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc\": container with ID starting with ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc not found: ID does not exist" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.138196 4684 scope.go:117] "RemoveContainer" containerID="091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd" Jan 23 09:54:06 crc kubenswrapper[4684]: E0123 09:54:06.138574 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd\": container with ID starting with 091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd not found: ID does not exist" containerID="091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd" Jan 23 09:54:06 crc kubenswrapper[4684]: I0123 09:54:06.138626 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd"} err="failed to get container status \"091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd\": rpc error: code = NotFound desc = could not find container \"091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd\": container with ID starting with 091a26a09bfe34a0367ba7ca7c62ceb1851e27832ab38a3415e6ecc6bc6baddd not found: ID does not exist" Jan 23 09:54:07 crc kubenswrapper[4684]: I0123 09:54:07.596616 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" path="/var/lib/kubelet/pods/3d29620c-2356-48fe-b10e-a559e2be975c/volumes" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.043962 4684 scope.go:117] "RemoveContainer" containerID="23a6818c025d2db803d06b1b0ad6686cd204b5a6a302a548826f2b5219d5c75f" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.117441 4684 scope.go:117] "RemoveContainer" containerID="5de067ac77489ac1a36897dacb3c13d9e19e9e2f95e74ff95953abe58b424a6e" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.182419 4684 scope.go:117] "RemoveContainer" containerID="693d96b0c0f467e33786d6297c7182ad904533d38c295b117911e904f35c5cbd" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.218070 4684 scope.go:117] "RemoveContainer" containerID="eeedf85593466cb28ef5a7381de33ffd980211147cf8537b3b4654801d041eb9" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.265100 4684 scope.go:117] "RemoveContainer" 
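The DeleteContainer errors above are gRPC NotFound statuses surfaced from the CRI runtime: the container was already gone by the time kubelet asked for its status again, which is benign during teardown. A minimal sketch, assuming a hypothetical removeFromRuntime stand-in for the CRI call, of matching that status with google.golang.org/grpc/status:

// notfound.go - tolerating gRPC NotFound on container removal, as the
// log entries above effectively do (the error is reported, not fatal).
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeFromRuntime is a hypothetical stand-in: here the runtime always
// reports the container as already gone.
func removeFromRuntime(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	id := "ab778cc7c66d012b5620441cd35936066cd5eaec1c49cc53ea6a2fc7b85d2cdc"
	if err := removeFromRuntime(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Already deleted elsewhere; treat as success.
			fmt.Println("container already gone:", id[:12])
			return
		}
		panic(err)
	}
}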
containerID="638162a7e6f0fcefc5aba9522b91cfe96c49bd47eb9e3c02ce5d5c0e9953735d" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.347466 4684 scope.go:117] "RemoveContainer" containerID="24accc2341840839f402cbf55f4918a0f0dd46f71345bca8d86328b525f446eb" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.456825 4684 scope.go:117] "RemoveContainer" containerID="09ff9cb666acdc428ea3d10a5b7ab372faef246c2af76caca4d11154bcec0425" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.492266 4684 scope.go:117] "RemoveContainer" containerID="a44c3af07c2082a113194cad0994e4fb45108d8000cc264b44f4d3eb30e6da93" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.528550 4684 scope.go:117] "RemoveContainer" containerID="108adf2b8cc722deaa239edf7ea5dcd32da0c275706a434a7d9039a8b6ec9d50" Jan 23 09:54:15 crc kubenswrapper[4684]: I0123 09:54:15.559465 4684 scope.go:117] "RemoveContainer" containerID="1bc7d2c4d9785d791b4f938771327dd59c3d30de92aa2f2586f8abe46f342ed0" Jan 23 09:55:13 crc kubenswrapper[4684]: I0123 09:55:13.728760 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:55:13 crc kubenswrapper[4684]: I0123 09:55:13.729353 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.836356 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2dmxx"] Jan 23 09:55:25 crc kubenswrapper[4684]: E0123 09:55:25.837383 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="extract-content" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.837402 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="extract-content" Jan 23 09:55:25 crc kubenswrapper[4684]: E0123 09:55:25.837411 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="registry-server" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.837419 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="registry-server" Jan 23 09:55:25 crc kubenswrapper[4684]: E0123 09:55:25.837449 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="extract-utilities" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.837459 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="extract-utilities" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.837691 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d29620c-2356-48fe-b10e-a559e2be975c" containerName="registry-server" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.839451 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.860521 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dmxx"] Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.962631 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-catalog-content\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.962957 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlxx\" (UniqueName: \"kubernetes.io/projected/30a63ef8-dcb6-4250-9861-f759a295224a-kube-api-access-5xlxx\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:25 crc kubenswrapper[4684]: I0123 09:55:25.963401 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-utilities\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.065645 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlxx\" (UniqueName: \"kubernetes.io/projected/30a63ef8-dcb6-4250-9861-f759a295224a-kube-api-access-5xlxx\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.065806 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-utilities\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.066436 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-utilities\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.066617 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-catalog-content\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.067042 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-catalog-content\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.090244 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5xlxx\" (UniqueName: \"kubernetes.io/projected/30a63ef8-dcb6-4250-9861-f759a295224a-kube-api-access-5xlxx\") pod \"redhat-marketplace-2dmxx\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.167428 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:26 crc kubenswrapper[4684]: I0123 09:55:26.689491 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dmxx"] Jan 23 09:55:27 crc kubenswrapper[4684]: I0123 09:55:27.636029 4684 generic.go:334] "Generic (PLEG): container finished" podID="30a63ef8-dcb6-4250-9861-f759a295224a" containerID="078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace" exitCode=0 Jan 23 09:55:27 crc kubenswrapper[4684]: I0123 09:55:27.636098 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dmxx" event={"ID":"30a63ef8-dcb6-4250-9861-f759a295224a","Type":"ContainerDied","Data":"078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace"} Jan 23 09:55:27 crc kubenswrapper[4684]: I0123 09:55:27.636565 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dmxx" event={"ID":"30a63ef8-dcb6-4250-9861-f759a295224a","Type":"ContainerStarted","Data":"fd35c2b3e4e6992bf48a253ccb6c89bb1c9c538f889f7920984703bc992d0c11"} Jan 23 09:55:29 crc kubenswrapper[4684]: E0123 09:55:29.646004 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a63ef8_dcb6_4250_9861_f759a295224a.slice/crio-9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:55:29 crc kubenswrapper[4684]: I0123 09:55:29.693953 4684 generic.go:334] "Generic (PLEG): container finished" podID="30a63ef8-dcb6-4250-9861-f759a295224a" containerID="9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77" exitCode=0 Jan 23 09:55:29 crc kubenswrapper[4684]: I0123 09:55:29.694009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dmxx" event={"ID":"30a63ef8-dcb6-4250-9861-f759a295224a","Type":"ContainerDied","Data":"9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77"} Jan 23 09:55:30 crc kubenswrapper[4684]: I0123 09:55:30.720194 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dmxx" event={"ID":"30a63ef8-dcb6-4250-9861-f759a295224a","Type":"ContainerStarted","Data":"f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981"} Jan 23 09:55:30 crc kubenswrapper[4684]: I0123 09:55:30.745944 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2dmxx" podStartSLOduration=3.215189452 podStartE2EDuration="5.745928921s" podCreationTimestamp="2026-01-23 09:55:25 +0000 UTC" firstStartedPulling="2026-01-23 09:55:27.641050214 +0000 UTC m=+2900.264428755" lastFinishedPulling="2026-01-23 09:55:30.171789673 +0000 UTC m=+2902.795168224" observedRunningTime="2026-01-23 09:55:30.743874422 +0000 UTC m=+2903.367252963" watchObservedRunningTime="2026-01-23 09:55:30.745928921 +0000 UTC m=+2903.369307452" Jan 23 09:55:36 crc kubenswrapper[4684]: I0123 
09:55:36.168004 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:36 crc kubenswrapper[4684]: I0123 09:55:36.168556 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:36 crc kubenswrapper[4684]: I0123 09:55:36.219672 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:36 crc kubenswrapper[4684]: I0123 09:55:36.821907 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:36 crc kubenswrapper[4684]: I0123 09:55:36.876437 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dmxx"] Jan 23 09:55:38 crc kubenswrapper[4684]: I0123 09:55:38.776881 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2dmxx" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="registry-server" containerID="cri-o://f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981" gracePeriod=2 Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.244137 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.434245 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-utilities\") pod \"30a63ef8-dcb6-4250-9861-f759a295224a\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.434301 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-catalog-content\") pod \"30a63ef8-dcb6-4250-9861-f759a295224a\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.434347 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlxx\" (UniqueName: \"kubernetes.io/projected/30a63ef8-dcb6-4250-9861-f759a295224a-kube-api-access-5xlxx\") pod \"30a63ef8-dcb6-4250-9861-f759a295224a\" (UID: \"30a63ef8-dcb6-4250-9861-f759a295224a\") " Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.437165 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-utilities" (OuterVolumeSpecName: "utilities") pod "30a63ef8-dcb6-4250-9861-f759a295224a" (UID: "30a63ef8-dcb6-4250-9861-f759a295224a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.441029 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a63ef8-dcb6-4250-9861-f759a295224a-kube-api-access-5xlxx" (OuterVolumeSpecName: "kube-api-access-5xlxx") pod "30a63ef8-dcb6-4250-9861-f759a295224a" (UID: "30a63ef8-dcb6-4250-9861-f759a295224a"). InnerVolumeSpecName "kube-api-access-5xlxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.459430 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30a63ef8-dcb6-4250-9861-f759a295224a" (UID: "30a63ef8-dcb6-4250-9861-f759a295224a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.537609 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.537652 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlxx\" (UniqueName: \"kubernetes.io/projected/30a63ef8-dcb6-4250-9861-f759a295224a-kube-api-access-5xlxx\") on node \"crc\" DevicePath \"\"" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.537667 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a63ef8-dcb6-4250-9861-f759a295224a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.787958 4684 generic.go:334] "Generic (PLEG): container finished" podID="30a63ef8-dcb6-4250-9861-f759a295224a" containerID="f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981" exitCode=0 Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.788013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dmxx" event={"ID":"30a63ef8-dcb6-4250-9861-f759a295224a","Type":"ContainerDied","Data":"f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981"} Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.788057 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dmxx" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.788085 4684 scope.go:117] "RemoveContainer" containerID="f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.788072 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dmxx" event={"ID":"30a63ef8-dcb6-4250-9861-f759a295224a","Type":"ContainerDied","Data":"fd35c2b3e4e6992bf48a253ccb6c89bb1c9c538f889f7920984703bc992d0c11"} Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.813923 4684 scope.go:117] "RemoveContainer" containerID="9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.830007 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dmxx"] Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.838281 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dmxx"] Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.838921 4684 scope.go:117] "RemoveContainer" containerID="078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.881491 4684 scope.go:117] "RemoveContainer" containerID="f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981" Jan 23 09:55:39 crc kubenswrapper[4684]: E0123 09:55:39.882149 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981\": container with ID starting with f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981 not found: ID does not exist" containerID="f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.882209 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981"} err="failed to get container status \"f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981\": rpc error: code = NotFound desc = could not find container \"f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981\": container with ID starting with f0d89b0e99c4b7936b557223d3ed7d7501624fe5b09ebd9b8b62f53e22e88981 not found: ID does not exist" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.882245 4684 scope.go:117] "RemoveContainer" containerID="9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77" Jan 23 09:55:39 crc kubenswrapper[4684]: E0123 09:55:39.888041 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77\": container with ID starting with 9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77 not found: ID does not exist" containerID="9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.888257 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77"} err="failed to get container status \"9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77\": rpc error: code = NotFound desc = could not find 
container \"9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77\": container with ID starting with 9196c545cb213a2b4998041c3c1aa1c0e87cdac025049b7f24a136ab59a5ca77 not found: ID does not exist" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.888368 4684 scope.go:117] "RemoveContainer" containerID="078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace" Jan 23 09:55:39 crc kubenswrapper[4684]: E0123 09:55:39.888901 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace\": container with ID starting with 078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace not found: ID does not exist" containerID="078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace" Jan 23 09:55:39 crc kubenswrapper[4684]: I0123 09:55:39.888991 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace"} err="failed to get container status \"078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace\": rpc error: code = NotFound desc = could not find container \"078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace\": container with ID starting with 078e2199e42c215ec85ed35607340c2ebd7082eb7a75cbfa060c04ebdfbf6ace not found: ID does not exist" Jan 23 09:55:41 crc kubenswrapper[4684]: I0123 09:55:41.594567 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" path="/var/lib/kubelet/pods/30a63ef8-dcb6-4250-9861-f759a295224a/volumes" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.729048 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.730191 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.822517 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-76qt7"] Jan 23 09:55:43 crc kubenswrapper[4684]: E0123 09:55:43.823451 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="extract-utilities" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.823662 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="extract-utilities" Jan 23 09:55:43 crc kubenswrapper[4684]: E0123 09:55:43.823688 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="registry-server" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.823712 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="registry-server" Jan 23 09:55:43 crc kubenswrapper[4684]: E0123 09:55:43.823751 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="extract-content" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.823759 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="extract-content" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.824107 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a63ef8-dcb6-4250-9861-f759a295224a" containerName="registry-server" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.827660 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.841465 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76qt7"] Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.920462 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkz8q\" (UniqueName: \"kubernetes.io/projected/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-kube-api-access-gkz8q\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.920595 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-utilities\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:43 crc kubenswrapper[4684]: I0123 09:55:43.920668 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-catalog-content\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.021996 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-utilities\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.022147 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-catalog-content\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.022214 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz8q\" (UniqueName: \"kubernetes.io/projected/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-kube-api-access-gkz8q\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.022592 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-catalog-content\") pod 
\"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.022592 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-utilities\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.041272 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz8q\" (UniqueName: \"kubernetes.io/projected/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-kube-api-access-gkz8q\") pod \"community-operators-76qt7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.152166 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.797365 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76qt7"] Jan 23 09:55:44 crc kubenswrapper[4684]: I0123 09:55:44.857549 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76qt7" event={"ID":"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7","Type":"ContainerStarted","Data":"cd349d1adba08225d0187ec1aa812f226e7ec2dfdcf71ed5dbea978683905bf2"} Jan 23 09:55:45 crc kubenswrapper[4684]: I0123 09:55:45.875520 4684 generic.go:334] "Generic (PLEG): container finished" podID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerID="043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b" exitCode=0 Jan 23 09:55:45 crc kubenswrapper[4684]: I0123 09:55:45.875605 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76qt7" event={"ID":"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7","Type":"ContainerDied","Data":"043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b"} Jan 23 09:55:48 crc kubenswrapper[4684]: I0123 09:55:48.904680 4684 generic.go:334] "Generic (PLEG): container finished" podID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerID="6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba" exitCode=0 Jan 23 09:55:48 crc kubenswrapper[4684]: I0123 09:55:48.904781 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76qt7" event={"ID":"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7","Type":"ContainerDied","Data":"6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba"} Jan 23 09:55:49 crc kubenswrapper[4684]: I0123 09:55:49.916449 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76qt7" event={"ID":"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7","Type":"ContainerStarted","Data":"023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e"} Jan 23 09:55:49 crc kubenswrapper[4684]: I0123 09:55:49.949970 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-76qt7" podStartSLOduration=3.502734985 podStartE2EDuration="6.949947128s" podCreationTimestamp="2026-01-23 09:55:43 +0000 UTC" firstStartedPulling="2026-01-23 09:55:45.88553826 +0000 UTC m=+2918.508916801" lastFinishedPulling="2026-01-23 09:55:49.332750403 +0000 UTC m=+2921.956128944" 
observedRunningTime="2026-01-23 09:55:49.941522966 +0000 UTC m=+2922.564901507" watchObservedRunningTime="2026-01-23 09:55:49.949947128 +0000 UTC m=+2922.573325679" Jan 23 09:55:54 crc kubenswrapper[4684]: I0123 09:55:54.152542 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:54 crc kubenswrapper[4684]: I0123 09:55:54.153190 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:54 crc kubenswrapper[4684]: I0123 09:55:54.205573 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:55 crc kubenswrapper[4684]: I0123 09:55:55.014566 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:55 crc kubenswrapper[4684]: I0123 09:55:55.069323 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76qt7"] Jan 23 09:55:56 crc kubenswrapper[4684]: I0123 09:55:56.976974 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-76qt7" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="registry-server" containerID="cri-o://023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e" gracePeriod=2 Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.597346 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.714116 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-catalog-content\") pod \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.714439 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkz8q\" (UniqueName: \"kubernetes.io/projected/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-kube-api-access-gkz8q\") pod \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.714673 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-utilities\") pod \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\" (UID: \"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7\") " Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.715733 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-utilities" (OuterVolumeSpecName: "utilities") pod "7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" (UID: "7dd7a52a-e55b-4bad-b663-f3ebbb748fc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.724010 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-kube-api-access-gkz8q" (OuterVolumeSpecName: "kube-api-access-gkz8q") pod "7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" (UID: "7dd7a52a-e55b-4bad-b663-f3ebbb748fc7"). 
InnerVolumeSpecName "kube-api-access-gkz8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.775409 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" (UID: "7dd7a52a-e55b-4bad-b663-f3ebbb748fc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.816821 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.816864 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkz8q\" (UniqueName: \"kubernetes.io/projected/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-kube-api-access-gkz8q\") on node \"crc\" DevicePath \"\"" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.816876 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.987482 4684 generic.go:334] "Generic (PLEG): container finished" podID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerID="023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e" exitCode=0 Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.987545 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76qt7" event={"ID":"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7","Type":"ContainerDied","Data":"023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e"} Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.987595 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-76qt7" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.987625 4684 scope.go:117] "RemoveContainer" containerID="023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e" Jan 23 09:55:57 crc kubenswrapper[4684]: I0123 09:55:57.987612 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76qt7" event={"ID":"7dd7a52a-e55b-4bad-b663-f3ebbb748fc7","Type":"ContainerDied","Data":"cd349d1adba08225d0187ec1aa812f226e7ec2dfdcf71ed5dbea978683905bf2"} Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.012972 4684 scope.go:117] "RemoveContainer" containerID="6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.038823 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76qt7"] Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.051673 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-76qt7"] Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.063251 4684 scope.go:117] "RemoveContainer" containerID="043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.104604 4684 scope.go:117] "RemoveContainer" containerID="023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e" Jan 23 09:55:58 crc kubenswrapper[4684]: E0123 09:55:58.105551 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e\": container with ID starting with 023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e not found: ID does not exist" containerID="023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.105589 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e"} err="failed to get container status \"023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e\": rpc error: code = NotFound desc = could not find container \"023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e\": container with ID starting with 023aa3774a4974e3211d21700a51b31c948e3c830ea9ce8d7a93cfeb988ef13e not found: ID does not exist" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.105632 4684 scope.go:117] "RemoveContainer" containerID="6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba" Jan 23 09:55:58 crc kubenswrapper[4684]: E0123 09:55:58.110547 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba\": container with ID starting with 6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba not found: ID does not exist" containerID="6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.110589 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba"} err="failed to get container status \"6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba\": rpc error: code = NotFound desc = could not find 
container \"6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba\": container with ID starting with 6204f9966e0c99167be45041287d2aabe9dd27f7896cb0cf396d9bf46e220aba not found: ID does not exist" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.110619 4684 scope.go:117] "RemoveContainer" containerID="043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b" Jan 23 09:55:58 crc kubenswrapper[4684]: E0123 09:55:58.111139 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b\": container with ID starting with 043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b not found: ID does not exist" containerID="043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.111193 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b"} err="failed to get container status \"043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b\": rpc error: code = NotFound desc = could not find container \"043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b\": container with ID starting with 043f2665a26fc89d72c7aa849ae2f52e4af73003e6077c4fd6913f11dc5d477b not found: ID does not exist" Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.996552 4684 generic.go:334] "Generic (PLEG): container finished" podID="47eb1e50-9644-40c1-b739-f70c2274808c" containerID="e6e84931bf8815d1c895dc0510faf536fec7e402e35591ea2462d0e428b57efb" exitCode=0 Jan 23 09:55:58 crc kubenswrapper[4684]: I0123 09:55:58.996606 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" event={"ID":"47eb1e50-9644-40c1-b739-f70c2274808c","Type":"ContainerDied","Data":"e6e84931bf8815d1c895dc0510faf536fec7e402e35591ea2462d0e428b57efb"} Jan 23 09:55:59 crc kubenswrapper[4684]: I0123 09:55:59.593805 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" path="/var/lib/kubelet/pods/7dd7a52a-e55b-4bad-b663-f3ebbb748fc7/volumes" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.418572 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.470840 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-bootstrap-combined-ca-bundle\") pod \"47eb1e50-9644-40c1-b739-f70c2274808c\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.470901 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ssh-key-openstack-edpm-ipam\") pod \"47eb1e50-9644-40c1-b739-f70c2274808c\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.470973 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ceph\") pod \"47eb1e50-9644-40c1-b739-f70c2274808c\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.471029 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-inventory\") pod \"47eb1e50-9644-40c1-b739-f70c2274808c\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.471074 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5p4j\" (UniqueName: \"kubernetes.io/projected/47eb1e50-9644-40c1-b739-f70c2274808c-kube-api-access-v5p4j\") pod \"47eb1e50-9644-40c1-b739-f70c2274808c\" (UID: \"47eb1e50-9644-40c1-b739-f70c2274808c\") " Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.490995 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "47eb1e50-9644-40c1-b739-f70c2274808c" (UID: "47eb1e50-9644-40c1-b739-f70c2274808c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.493130 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47eb1e50-9644-40c1-b739-f70c2274808c-kube-api-access-v5p4j" (OuterVolumeSpecName: "kube-api-access-v5p4j") pod "47eb1e50-9644-40c1-b739-f70c2274808c" (UID: "47eb1e50-9644-40c1-b739-f70c2274808c"). InnerVolumeSpecName "kube-api-access-v5p4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.493508 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ceph" (OuterVolumeSpecName: "ceph") pod "47eb1e50-9644-40c1-b739-f70c2274808c" (UID: "47eb1e50-9644-40c1-b739-f70c2274808c"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.505471 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-inventory" (OuterVolumeSpecName: "inventory") pod "47eb1e50-9644-40c1-b739-f70c2274808c" (UID: "47eb1e50-9644-40c1-b739-f70c2274808c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.517935 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47eb1e50-9644-40c1-b739-f70c2274808c" (UID: "47eb1e50-9644-40c1-b739-f70c2274808c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.573747 4684 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.574129 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.574146 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.574163 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47eb1e50-9644-40c1-b739-f70c2274808c-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:00 crc kubenswrapper[4684]: I0123 09:56:00.574177 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5p4j\" (UniqueName: \"kubernetes.io/projected/47eb1e50-9644-40c1-b739-f70c2274808c-kube-api-access-v5p4j\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.018487 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" event={"ID":"47eb1e50-9644-40c1-b739-f70c2274808c","Type":"ContainerDied","Data":"d6a5d97bc3ae5b308bac4594c6bcae5a102a12e532fab6951321e405fa2d5312"} Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.018524 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a5d97bc3ae5b308bac4594c6bcae5a102a12e532fab6951321e405fa2d5312" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.018588 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119074 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb"] Jan 23 09:56:01 crc kubenswrapper[4684]: E0123 09:56:01.119415 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="extract-utilities" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119434 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="extract-utilities" Jan 23 09:56:01 crc kubenswrapper[4684]: E0123 09:56:01.119445 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="registry-server" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119452 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="registry-server" Jan 23 09:56:01 crc kubenswrapper[4684]: E0123 09:56:01.119468 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="extract-content" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119474 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="extract-content" Jan 23 09:56:01 crc kubenswrapper[4684]: E0123 09:56:01.119502 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47eb1e50-9644-40c1-b739-f70c2274808c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119508 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="47eb1e50-9644-40c1-b739-f70c2274808c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119718 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd7a52a-e55b-4bad-b663-f3ebbb748fc7" containerName="registry-server" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.119738 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="47eb1e50-9644-40c1-b739-f70c2274808c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.126127 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.128479 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.128919 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.131422 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.131552 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.132893 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.133629 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb"] Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.188844 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.189012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.189094 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.189126 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j842\" (UniqueName: \"kubernetes.io/projected/f86589ab-3e45-48a5-a081-96572c2bcfca-kube-api-access-9j842\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.293252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc 
kubenswrapper[4684]: I0123 09:56:01.293369 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.293444 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j842\" (UniqueName: \"kubernetes.io/projected/f86589ab-3e45-48a5-a081-96572c2bcfca-kube-api-access-9j842\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.293540 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.308898 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.323210 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.324570 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.329548 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j842\" (UniqueName: \"kubernetes.io/projected/f86589ab-3e45-48a5-a081-96572c2bcfca-kube-api-access-9j842\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:01 crc kubenswrapper[4684]: I0123 09:56:01.447870 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:02 crc kubenswrapper[4684]: W0123 09:56:02.082877 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf86589ab_3e45_48a5_a081_96572c2bcfca.slice/crio-0f15cbd9ad7c71e227310691a9ca319ca70341881726b0876d6dd2370cd01750 WatchSource:0}: Error finding container 0f15cbd9ad7c71e227310691a9ca319ca70341881726b0876d6dd2370cd01750: Status 404 returned error can't find the container with id 0f15cbd9ad7c71e227310691a9ca319ca70341881726b0876d6dd2370cd01750 Jan 23 09:56:02 crc kubenswrapper[4684]: I0123 09:56:02.083776 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb"] Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.034664 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" event={"ID":"f86589ab-3e45-48a5-a081-96572c2bcfca","Type":"ContainerStarted","Data":"0f15cbd9ad7c71e227310691a9ca319ca70341881726b0876d6dd2370cd01750"} Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.364542 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n46rr"] Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.367265 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.422965 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n46rr"] Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.436473 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddpq\" (UniqueName: \"kubernetes.io/projected/61921386-cdd1-46d0-866c-114115acde03-kube-api-access-5ddpq\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.436801 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-utilities\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.437042 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-catalog-content\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.539578 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-catalog-content\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.539743 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddpq\" (UniqueName: 
\"kubernetes.io/projected/61921386-cdd1-46d0-866c-114115acde03-kube-api-access-5ddpq\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.539782 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-utilities\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.540237 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-catalog-content\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.540639 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-utilities\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.562907 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddpq\" (UniqueName: \"kubernetes.io/projected/61921386-cdd1-46d0-866c-114115acde03-kube-api-access-5ddpq\") pod \"certified-operators-n46rr\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:03 crc kubenswrapper[4684]: I0123 09:56:03.730574 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:04 crc kubenswrapper[4684]: I0123 09:56:04.331360 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n46rr"] Jan 23 09:56:05 crc kubenswrapper[4684]: I0123 09:56:05.051222 4684 generic.go:334] "Generic (PLEG): container finished" podID="61921386-cdd1-46d0-866c-114115acde03" containerID="6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee" exitCode=0 Jan 23 09:56:05 crc kubenswrapper[4684]: I0123 09:56:05.051313 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerDied","Data":"6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee"} Jan 23 09:56:05 crc kubenswrapper[4684]: I0123 09:56:05.051344 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerStarted","Data":"3dae965adaa0ae66a4c24528b0dd4968112457d1ae656a38583833d794be68b6"} Jan 23 09:56:05 crc kubenswrapper[4684]: I0123 09:56:05.078335 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" event={"ID":"f86589ab-3e45-48a5-a081-96572c2bcfca","Type":"ContainerStarted","Data":"6b5bdbdfc0d822037ef27f392ff4e28ad231ac5e852e9c56ff08c6c3ac1b97a8"} Jan 23 09:56:05 crc kubenswrapper[4684]: I0123 09:56:05.109674 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" podStartSLOduration=1.946755443 podStartE2EDuration="4.109648923s" podCreationTimestamp="2026-01-23 09:56:01 +0000 UTC" firstStartedPulling="2026-01-23 09:56:02.086960858 +0000 UTC m=+2934.710339399" lastFinishedPulling="2026-01-23 09:56:04.249854338 +0000 UTC m=+2936.873232879" observedRunningTime="2026-01-23 09:56:05.106014269 +0000 UTC m=+2937.729392830" watchObservedRunningTime="2026-01-23 09:56:05.109648923 +0000 UTC m=+2937.733027484" Jan 23 09:56:07 crc kubenswrapper[4684]: I0123 09:56:07.335868 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerStarted","Data":"688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6"} Jan 23 09:56:10 crc kubenswrapper[4684]: I0123 09:56:10.374322 4684 generic.go:334] "Generic (PLEG): container finished" podID="61921386-cdd1-46d0-866c-114115acde03" containerID="688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6" exitCode=0 Jan 23 09:56:10 crc kubenswrapper[4684]: I0123 09:56:10.374395 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerDied","Data":"688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6"} Jan 23 09:56:12 crc kubenswrapper[4684]: I0123 09:56:12.392779 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerStarted","Data":"ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087"} Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.728849 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.729239 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.729303 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf"
Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.731939 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n46rr"
Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.732203 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n46rr"
Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.784197 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n46rr"
Jan 23 09:56:13 crc kubenswrapper[4684]: I0123 09:56:13.811144 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n46rr" podStartSLOduration=4.029894909 podStartE2EDuration="10.811121255s" podCreationTimestamp="2026-01-23 09:56:03 +0000 UTC" firstStartedPulling="2026-01-23 09:56:05.069325184 +0000 UTC m=+2937.692703725" lastFinishedPulling="2026-01-23 09:56:11.85055153 +0000 UTC m=+2944.473930071" observedRunningTime="2026-01-23 09:56:12.418146558 +0000 UTC m=+2945.041525099" watchObservedRunningTime="2026-01-23 09:56:13.811121255 +0000 UTC m=+2946.434499796"
Jan 23 09:56:14 crc kubenswrapper[4684]: I0123 09:56:14.413333 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d1f652ff74148a06a7cece32bb007304d1575a17aa3e4576d5bb01005d192bb"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 09:56:14 crc kubenswrapper[4684]: I0123 09:56:14.413434 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://8d1f652ff74148a06a7cece32bb007304d1575a17aa3e4576d5bb01005d192bb" gracePeriod=600
Jan 23 09:56:15 crc kubenswrapper[4684]: I0123 09:56:15.428468 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="8d1f652ff74148a06a7cece32bb007304d1575a17aa3e4576d5bb01005d192bb" exitCode=0
Jan 23 09:56:15 crc kubenswrapper[4684]: I0123 09:56:15.428910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"8d1f652ff74148a06a7cece32bb007304d1575a17aa3e4576d5bb01005d192bb"}
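
[editor's note] The sequence above is a liveness-probe-driven restart: the HTTP check against http://127.0.0.1:8798/health fails with connection refused, the kubelet records "Container machine-config-daemon failed liveness probe, will be restarted", kills the container with gracePeriod=600, and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. A sketch of a probe equivalent to the request seen in the log, expressed with the Kubernetes API types; only host, port, and path come from the log, while the timing fields are illustrative assumptions, since the pod spec itself is not part of this log:

```go
// Sketch: the liveness probe implied by the failing GET above, built with
// k8s.io/api types. Host/Path/Port mirror the logged URL; PeriodSeconds
// and FailureThreshold are placeholders, not the pod's real settings.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // mirrors the logged URL
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // assumption
		FailureThreshold: 3,  // assumption
	}
	fmt.Printf("GET http://%s:%d%s\n",
		liveness.HTTPGet.Host, liveness.HTTPGet.Port.IntValue(), liveness.HTTPGet.Path)
}
```

The gracePeriod=600 on the kill is consistent with a terminationGracePeriodSeconds of 600 in the pod spec, which the kubelet applies when it restarts a container for a failed liveness probe.
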
pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"} Jan 23 09:56:15 crc kubenswrapper[4684]: I0123 09:56:15.429345 4684 scope.go:117] "RemoveContainer" containerID="e241fb8ce89b1144b77898bb643960ab8da29fdc0ca5835cb761cdb036975632" Jan 23 09:56:23 crc kubenswrapper[4684]: I0123 09:56:23.783851 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:23 crc kubenswrapper[4684]: I0123 09:56:23.853587 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n46rr"] Jan 23 09:56:24 crc kubenswrapper[4684]: I0123 09:56:24.504416 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n46rr" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="registry-server" containerID="cri-o://ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087" gracePeriod=2 Jan 23 09:56:24 crc kubenswrapper[4684]: I0123 09:56:24.972979 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.069641 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddpq\" (UniqueName: \"kubernetes.io/projected/61921386-cdd1-46d0-866c-114115acde03-kube-api-access-5ddpq\") pod \"61921386-cdd1-46d0-866c-114115acde03\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.069816 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-catalog-content\") pod \"61921386-cdd1-46d0-866c-114115acde03\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.069872 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-utilities\") pod \"61921386-cdd1-46d0-866c-114115acde03\" (UID: \"61921386-cdd1-46d0-866c-114115acde03\") " Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.070786 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-utilities" (OuterVolumeSpecName: "utilities") pod "61921386-cdd1-46d0-866c-114115acde03" (UID: "61921386-cdd1-46d0-866c-114115acde03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.087249 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61921386-cdd1-46d0-866c-114115acde03-kube-api-access-5ddpq" (OuterVolumeSpecName: "kube-api-access-5ddpq") pod "61921386-cdd1-46d0-866c-114115acde03" (UID: "61921386-cdd1-46d0-866c-114115acde03"). InnerVolumeSpecName "kube-api-access-5ddpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.126084 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61921386-cdd1-46d0-866c-114115acde03" (UID: "61921386-cdd1-46d0-866c-114115acde03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.171725 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ddpq\" (UniqueName: \"kubernetes.io/projected/61921386-cdd1-46d0-866c-114115acde03-kube-api-access-5ddpq\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.171769 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.171781 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61921386-cdd1-46d0-866c-114115acde03-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.521346 4684 generic.go:334] "Generic (PLEG): container finished" podID="61921386-cdd1-46d0-866c-114115acde03" containerID="ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087" exitCode=0 Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.521403 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerDied","Data":"ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087"} Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.521439 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n46rr" event={"ID":"61921386-cdd1-46d0-866c-114115acde03","Type":"ContainerDied","Data":"3dae965adaa0ae66a4c24528b0dd4968112457d1ae656a38583833d794be68b6"} Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.521462 4684 scope.go:117] "RemoveContainer" containerID="ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.521645 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n46rr" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.550068 4684 scope.go:117] "RemoveContainer" containerID="688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.580512 4684 scope.go:117] "RemoveContainer" containerID="6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.607404 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n46rr"] Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.607466 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n46rr"] Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.619513 4684 scope.go:117] "RemoveContainer" containerID="ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087" Jan 23 09:56:25 crc kubenswrapper[4684]: E0123 09:56:25.620082 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087\": container with ID starting with ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087 not found: ID does not exist" containerID="ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.620130 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087"} err="failed to get container status \"ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087\": rpc error: code = NotFound desc = could not find container \"ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087\": container with ID starting with ede721b893be65df61b4eb283da3f48abbbc74d5ed252785a3d75478bd8f5087 not found: ID does not exist" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.620157 4684 scope.go:117] "RemoveContainer" containerID="688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6" Jan 23 09:56:25 crc kubenswrapper[4684]: E0123 09:56:25.620427 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6\": container with ID starting with 688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6 not found: ID does not exist" containerID="688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.620457 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6"} err="failed to get container status \"688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6\": rpc error: code = NotFound desc = could not find container \"688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6\": container with ID starting with 688db5d4c84020f13732e371e286b9bb07263bb85069f64a4aeaf5ea1aefaeb6 not found: ID does not exist" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.620477 4684 scope.go:117] "RemoveContainer" containerID="6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee" Jan 23 09:56:25 crc kubenswrapper[4684]: E0123 09:56:25.620931 4684 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee\": container with ID starting with 6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee not found: ID does not exist" containerID="6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee" Jan 23 09:56:25 crc kubenswrapper[4684]: I0123 09:56:25.620971 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee"} err="failed to get container status \"6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee\": rpc error: code = NotFound desc = could not find container \"6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee\": container with ID starting with 6f29cb4963c840815cd2508e120d2c1eb80016b97fc16a00a9c35a5a0666a0ee not found: ID does not exist" Jan 23 09:56:27 crc kubenswrapper[4684]: I0123 09:56:27.592593 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61921386-cdd1-46d0-866c-114115acde03" path="/var/lib/kubelet/pods/61921386-cdd1-46d0-866c-114115acde03/volumes" Jan 23 09:56:36 crc kubenswrapper[4684]: I0123 09:56:36.612677 4684 generic.go:334] "Generic (PLEG): container finished" podID="f86589ab-3e45-48a5-a081-96572c2bcfca" containerID="6b5bdbdfc0d822037ef27f392ff4e28ad231ac5e852e9c56ff08c6c3ac1b97a8" exitCode=0 Jan 23 09:56:36 crc kubenswrapper[4684]: I0123 09:56:36.613322 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" event={"ID":"f86589ab-3e45-48a5-a081-96572c2bcfca","Type":"ContainerDied","Data":"6b5bdbdfc0d822037ef27f392ff4e28ad231ac5e852e9c56ff08c6c3ac1b97a8"} Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.046539 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.147303 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j842\" (UniqueName: \"kubernetes.io/projected/f86589ab-3e45-48a5-a081-96572c2bcfca-kube-api-access-9j842\") pod \"f86589ab-3e45-48a5-a081-96572c2bcfca\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.147572 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ssh-key-openstack-edpm-ipam\") pod \"f86589ab-3e45-48a5-a081-96572c2bcfca\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.147616 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-inventory\") pod \"f86589ab-3e45-48a5-a081-96572c2bcfca\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.147677 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ceph\") pod \"f86589ab-3e45-48a5-a081-96572c2bcfca\" (UID: \"f86589ab-3e45-48a5-a081-96572c2bcfca\") " Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.163246 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86589ab-3e45-48a5-a081-96572c2bcfca-kube-api-access-9j842" (OuterVolumeSpecName: "kube-api-access-9j842") pod "f86589ab-3e45-48a5-a081-96572c2bcfca" (UID: "f86589ab-3e45-48a5-a081-96572c2bcfca"). InnerVolumeSpecName "kube-api-access-9j842". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.165082 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ceph" (OuterVolumeSpecName: "ceph") pod "f86589ab-3e45-48a5-a081-96572c2bcfca" (UID: "f86589ab-3e45-48a5-a081-96572c2bcfca"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.177600 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f86589ab-3e45-48a5-a081-96572c2bcfca" (UID: "f86589ab-3e45-48a5-a081-96572c2bcfca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.180468 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-inventory" (OuterVolumeSpecName: "inventory") pod "f86589ab-3e45-48a5-a081-96572c2bcfca" (UID: "f86589ab-3e45-48a5-a081-96572c2bcfca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.250165 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.250496 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.250512 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j842\" (UniqueName: \"kubernetes.io/projected/f86589ab-3e45-48a5-a081-96572c2bcfca-kube-api-access-9j842\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.250551 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f86589ab-3e45-48a5-a081-96572c2bcfca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.628508 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" event={"ID":"f86589ab-3e45-48a5-a081-96572c2bcfca","Type":"ContainerDied","Data":"0f15cbd9ad7c71e227310691a9ca319ca70341881726b0876d6dd2370cd01750"} Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.628552 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f15cbd9ad7c71e227310691a9ca319ca70341881726b0876d6dd2370cd01750" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.628859 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.719620 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8"] Jan 23 09:56:38 crc kubenswrapper[4684]: E0123 09:56:38.720017 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="extract-utilities" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720039 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="extract-utilities" Jan 23 09:56:38 crc kubenswrapper[4684]: E0123 09:56:38.720083 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="registry-server" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720093 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="registry-server" Jan 23 09:56:38 crc kubenswrapper[4684]: E0123 09:56:38.720110 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86589ab-3e45-48a5-a081-96572c2bcfca" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720120 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86589ab-3e45-48a5-a081-96572c2bcfca" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:38 crc kubenswrapper[4684]: E0123 09:56:38.720129 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="extract-content" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720136 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="extract-content" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720338 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86589ab-3e45-48a5-a081-96572c2bcfca" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720354 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="61921386-cdd1-46d0-866c-114115acde03" containerName="registry-server" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.720962 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.724819 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.724828 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.726152 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.727314 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.728611 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.750546 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8"] Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.861481 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csnt9\" (UniqueName: \"kubernetes.io/projected/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-kube-api-access-csnt9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.861800 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.861946 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.862087 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.964928 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc 
kubenswrapper[4684]: I0123 09:56:38.965219 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csnt9\" (UniqueName: \"kubernetes.io/projected/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-kube-api-access-csnt9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.965319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.965370 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.970364 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.970453 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.971562 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:38 crc kubenswrapper[4684]: I0123 09:56:38.988267 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csnt9\" (UniqueName: \"kubernetes.io/projected/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-kube-api-access-csnt9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:39 crc kubenswrapper[4684]: I0123 09:56:39.036789 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:39 crc kubenswrapper[4684]: W0123 09:56:39.627249 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2aa43b6_cc3e_4a3f_a98d_a788624c5253.slice/crio-070478a7fa28ef004c0cd11dd0cef6bcb97d27016ce993ee72be994e510b9dd0 WatchSource:0}: Error finding container 070478a7fa28ef004c0cd11dd0cef6bcb97d27016ce993ee72be994e510b9dd0: Status 404 returned error can't find the container with id 070478a7fa28ef004c0cd11dd0cef6bcb97d27016ce993ee72be994e510b9dd0 Jan 23 09:56:39 crc kubenswrapper[4684]: I0123 09:56:39.629715 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8"] Jan 23 09:56:39 crc kubenswrapper[4684]: I0123 09:56:39.649975 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" event={"ID":"e2aa43b6-cc3e-4a3f-a98d-a788624c5253","Type":"ContainerStarted","Data":"070478a7fa28ef004c0cd11dd0cef6bcb97d27016ce993ee72be994e510b9dd0"} Jan 23 09:56:40 crc kubenswrapper[4684]: I0123 09:56:40.660673 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" event={"ID":"e2aa43b6-cc3e-4a3f-a98d-a788624c5253","Type":"ContainerStarted","Data":"1717fe82f16c653393e157fb9e349a1e5d484d83ecb629ce8226c2e2ac3814b0"} Jan 23 09:56:40 crc kubenswrapper[4684]: I0123 09:56:40.683670 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" podStartSLOduration=2.252696939 podStartE2EDuration="2.683648872s" podCreationTimestamp="2026-01-23 09:56:38 +0000 UTC" firstStartedPulling="2026-01-23 09:56:39.634558068 +0000 UTC m=+2972.257936609" lastFinishedPulling="2026-01-23 09:56:40.065510001 +0000 UTC m=+2972.688888542" observedRunningTime="2026-01-23 09:56:40.678527475 +0000 UTC m=+2973.301906036" watchObservedRunningTime="2026-01-23 09:56:40.683648872 +0000 UTC m=+2973.307027413" Jan 23 09:56:45 crc kubenswrapper[4684]: I0123 09:56:45.714661 4684 generic.go:334] "Generic (PLEG): container finished" podID="e2aa43b6-cc3e-4a3f-a98d-a788624c5253" containerID="1717fe82f16c653393e157fb9e349a1e5d484d83ecb629ce8226c2e2ac3814b0" exitCode=0 Jan 23 09:56:45 crc kubenswrapper[4684]: I0123 09:56:45.715141 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" event={"ID":"e2aa43b6-cc3e-4a3f-a98d-a788624c5253","Type":"ContainerDied","Data":"1717fe82f16c653393e157fb9e349a1e5d484d83ecb629ce8226c2e2ac3814b0"} Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.151146 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.246664 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ceph\") pod \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.246787 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ssh-key-openstack-edpm-ipam\") pod \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.246815 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csnt9\" (UniqueName: \"kubernetes.io/projected/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-kube-api-access-csnt9\") pod \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.246952 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-inventory\") pod \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\" (UID: \"e2aa43b6-cc3e-4a3f-a98d-a788624c5253\") " Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.252542 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ceph" (OuterVolumeSpecName: "ceph") pod "e2aa43b6-cc3e-4a3f-a98d-a788624c5253" (UID: "e2aa43b6-cc3e-4a3f-a98d-a788624c5253"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.253277 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-kube-api-access-csnt9" (OuterVolumeSpecName: "kube-api-access-csnt9") pod "e2aa43b6-cc3e-4a3f-a98d-a788624c5253" (UID: "e2aa43b6-cc3e-4a3f-a98d-a788624c5253"). InnerVolumeSpecName "kube-api-access-csnt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.275597 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e2aa43b6-cc3e-4a3f-a98d-a788624c5253" (UID: "e2aa43b6-cc3e-4a3f-a98d-a788624c5253"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.275654 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-inventory" (OuterVolumeSpecName: "inventory") pod "e2aa43b6-cc3e-4a3f-a98d-a788624c5253" (UID: "e2aa43b6-cc3e-4a3f-a98d-a788624c5253"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.349571 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.349912 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.349933 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csnt9\" (UniqueName: \"kubernetes.io/projected/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-kube-api-access-csnt9\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.349947 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2aa43b6-cc3e-4a3f-a98d-a788624c5253-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.733633 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" event={"ID":"e2aa43b6-cc3e-4a3f-a98d-a788624c5253","Type":"ContainerDied","Data":"070478a7fa28ef004c0cd11dd0cef6bcb97d27016ce993ee72be994e510b9dd0"} Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.733722 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="070478a7fa28ef004c0cd11dd0cef6bcb97d27016ce993ee72be994e510b9dd0" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.733773 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.823010 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv"] Jan 23 09:56:47 crc kubenswrapper[4684]: E0123 09:56:47.823578 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2aa43b6-cc3e-4a3f-a98d-a788624c5253" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.823622 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2aa43b6-cc3e-4a3f-a98d-a788624c5253" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.823817 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2aa43b6-cc3e-4a3f-a98d-a788624c5253" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.824374 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.829876 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.829901 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.830098 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.830253 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.833295 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.857948 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv"] Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.960314 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.960429 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.960479 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbq4j\" (UniqueName: \"kubernetes.io/projected/9ed4c3b1-8a47-426f-a72f-80df33efa202-kube-api-access-kbq4j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:47 crc kubenswrapper[4684]: I0123 09:56:47.960571 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.062464 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.062542 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kbq4j\" (UniqueName: \"kubernetes.io/projected/9ed4c3b1-8a47-426f-a72f-80df33efa202-kube-api-access-kbq4j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.062646 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.062759 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.069281 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.072506 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.080880 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.087312 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbq4j\" (UniqueName: \"kubernetes.io/projected/9ed4c3b1-8a47-426f-a72f-80df33efa202-kube-api-access-kbq4j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2nhwv\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.141265 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.680773 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv"] Jan 23 09:56:48 crc kubenswrapper[4684]: I0123 09:56:48.744958 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" event={"ID":"9ed4c3b1-8a47-426f-a72f-80df33efa202","Type":"ContainerStarted","Data":"33e8fa5bb0fcc35099104c6c850541deef0f6384a12a62fa8a02a14a63b95fc3"} Jan 23 09:56:49 crc kubenswrapper[4684]: I0123 09:56:49.754678 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" event={"ID":"9ed4c3b1-8a47-426f-a72f-80df33efa202","Type":"ContainerStarted","Data":"16b17a8716cb8817015e7663f6eab5055433b6d728d99e7a202b4a48b9ee5a9f"} Jan 23 09:56:49 crc kubenswrapper[4684]: I0123 09:56:49.776620 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" podStartSLOduration=2.349881236 podStartE2EDuration="2.776593838s" podCreationTimestamp="2026-01-23 09:56:47 +0000 UTC" firstStartedPulling="2026-01-23 09:56:48.687370813 +0000 UTC m=+2981.310749354" lastFinishedPulling="2026-01-23 09:56:49.114083415 +0000 UTC m=+2981.737461956" observedRunningTime="2026-01-23 09:56:49.774852518 +0000 UTC m=+2982.398231469" watchObservedRunningTime="2026-01-23 09:56:49.776593838 +0000 UTC m=+2982.399972379" Jan 23 09:57:33 crc kubenswrapper[4684]: I0123 09:57:33.108511 4684 generic.go:334] "Generic (PLEG): container finished" podID="9ed4c3b1-8a47-426f-a72f-80df33efa202" containerID="16b17a8716cb8817015e7663f6eab5055433b6d728d99e7a202b4a48b9ee5a9f" exitCode=0 Jan 23 09:57:33 crc kubenswrapper[4684]: I0123 09:57:33.109038 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" event={"ID":"9ed4c3b1-8a47-426f-a72f-80df33efa202","Type":"ContainerDied","Data":"16b17a8716cb8817015e7663f6eab5055433b6d728d99e7a202b4a48b9ee5a9f"} Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.512993 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.573425 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbq4j\" (UniqueName: \"kubernetes.io/projected/9ed4c3b1-8a47-426f-a72f-80df33efa202-kube-api-access-kbq4j\") pod \"9ed4c3b1-8a47-426f-a72f-80df33efa202\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.575031 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ssh-key-openstack-edpm-ipam\") pod \"9ed4c3b1-8a47-426f-a72f-80df33efa202\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.575091 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-inventory\") pod \"9ed4c3b1-8a47-426f-a72f-80df33efa202\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.575117 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ceph\") pod \"9ed4c3b1-8a47-426f-a72f-80df33efa202\" (UID: \"9ed4c3b1-8a47-426f-a72f-80df33efa202\") " Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.585236 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ceph" (OuterVolumeSpecName: "ceph") pod "9ed4c3b1-8a47-426f-a72f-80df33efa202" (UID: "9ed4c3b1-8a47-426f-a72f-80df33efa202"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.585268 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed4c3b1-8a47-426f-a72f-80df33efa202-kube-api-access-kbq4j" (OuterVolumeSpecName: "kube-api-access-kbq4j") pod "9ed4c3b1-8a47-426f-a72f-80df33efa202" (UID: "9ed4c3b1-8a47-426f-a72f-80df33efa202"). InnerVolumeSpecName "kube-api-access-kbq4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.601340 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ed4c3b1-8a47-426f-a72f-80df33efa202" (UID: "9ed4c3b1-8a47-426f-a72f-80df33efa202"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.604460 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-inventory" (OuterVolumeSpecName: "inventory") pod "9ed4c3b1-8a47-426f-a72f-80df33efa202" (UID: "9ed4c3b1-8a47-426f-a72f-80df33efa202"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.678337 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.678380 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.678394 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ed4c3b1-8a47-426f-a72f-80df33efa202-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:34 crc kubenswrapper[4684]: I0123 09:57:34.678406 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbq4j\" (UniqueName: \"kubernetes.io/projected/9ed4c3b1-8a47-426f-a72f-80df33efa202-kube-api-access-kbq4j\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.124179 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" event={"ID":"9ed4c3b1-8a47-426f-a72f-80df33efa202","Type":"ContainerDied","Data":"33e8fa5bb0fcc35099104c6c850541deef0f6384a12a62fa8a02a14a63b95fc3"} Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.124221 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e8fa5bb0fcc35099104c6c850541deef0f6384a12a62fa8a02a14a63b95fc3" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.124638 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2nhwv" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.259023 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz"] Jan 23 09:57:35 crc kubenswrapper[4684]: E0123 09:57:35.259748 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed4c3b1-8a47-426f-a72f-80df33efa202" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.262440 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed4c3b1-8a47-426f-a72f-80df33efa202" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.263001 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed4c3b1-8a47-426f-a72f-80df33efa202" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.263935 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.267450 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.267782 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.267967 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.268136 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.268714 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz"] Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.269672 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.390190 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.390281 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.390326 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6psb\" (UniqueName: \"kubernetes.io/projected/01a17f7c-b39e-4dd6-9a40-d474056ee41a-kube-api-access-w6psb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.390398 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.493311 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.493451 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w6psb\" (UniqueName: \"kubernetes.io/projected/01a17f7c-b39e-4dd6-9a40-d474056ee41a-kube-api-access-w6psb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.493531 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.493660 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.497882 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.498464 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.499064 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.511774 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6psb\" (UniqueName: \"kubernetes.io/projected/01a17f7c-b39e-4dd6-9a40-d474056ee41a-kube-api-access-w6psb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:35 crc kubenswrapper[4684]: I0123 09:57:35.598111 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:36 crc kubenswrapper[4684]: I0123 09:57:36.140431 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz"] Jan 23 09:57:37 crc kubenswrapper[4684]: I0123 09:57:37.145472 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" event={"ID":"01a17f7c-b39e-4dd6-9a40-d474056ee41a","Type":"ContainerStarted","Data":"a44cd4349f56779dd930fbf9e1143b475d1f373dfc9990fd6d3a32a19cc9eedf"} Jan 23 09:57:37 crc kubenswrapper[4684]: I0123 09:57:37.145845 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" event={"ID":"01a17f7c-b39e-4dd6-9a40-d474056ee41a","Type":"ContainerStarted","Data":"a9d11ac2a4175b06d4669411ad429ce225a8c17de19832d0bcb3f22bfef707aa"} Jan 23 09:57:37 crc kubenswrapper[4684]: I0123 09:57:37.173518 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" podStartSLOduration=1.736428658 podStartE2EDuration="2.173499788s" podCreationTimestamp="2026-01-23 09:57:35 +0000 UTC" firstStartedPulling="2026-01-23 09:57:36.179312692 +0000 UTC m=+3028.802691233" lastFinishedPulling="2026-01-23 09:57:36.616383822 +0000 UTC m=+3029.239762363" observedRunningTime="2026-01-23 09:57:37.160935518 +0000 UTC m=+3029.784314059" watchObservedRunningTime="2026-01-23 09:57:37.173499788 +0000 UTC m=+3029.796878329" Jan 23 09:57:41 crc kubenswrapper[4684]: I0123 09:57:41.182462 4684 generic.go:334] "Generic (PLEG): container finished" podID="01a17f7c-b39e-4dd6-9a40-d474056ee41a" containerID="a44cd4349f56779dd930fbf9e1143b475d1f373dfc9990fd6d3a32a19cc9eedf" exitCode=0 Jan 23 09:57:41 crc kubenswrapper[4684]: I0123 09:57:41.182505 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" event={"ID":"01a17f7c-b39e-4dd6-9a40-d474056ee41a","Type":"ContainerDied","Data":"a44cd4349f56779dd930fbf9e1143b475d1f373dfc9990fd6d3a32a19cc9eedf"} Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.591783 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.637292 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ssh-key-openstack-edpm-ipam\") pod \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.637355 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6psb\" (UniqueName: \"kubernetes.io/projected/01a17f7c-b39e-4dd6-9a40-d474056ee41a-kube-api-access-w6psb\") pod \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.637382 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-inventory\") pod \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.637430 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ceph\") pod \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\" (UID: \"01a17f7c-b39e-4dd6-9a40-d474056ee41a\") " Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.652945 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ceph" (OuterVolumeSpecName: "ceph") pod "01a17f7c-b39e-4dd6-9a40-d474056ee41a" (UID: "01a17f7c-b39e-4dd6-9a40-d474056ee41a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.653020 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a17f7c-b39e-4dd6-9a40-d474056ee41a-kube-api-access-w6psb" (OuterVolumeSpecName: "kube-api-access-w6psb") pod "01a17f7c-b39e-4dd6-9a40-d474056ee41a" (UID: "01a17f7c-b39e-4dd6-9a40-d474056ee41a"). InnerVolumeSpecName "kube-api-access-w6psb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.674794 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-inventory" (OuterVolumeSpecName: "inventory") pod "01a17f7c-b39e-4dd6-9a40-d474056ee41a" (UID: "01a17f7c-b39e-4dd6-9a40-d474056ee41a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.690467 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01a17f7c-b39e-4dd6-9a40-d474056ee41a" (UID: "01a17f7c-b39e-4dd6-9a40-d474056ee41a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.740342 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.740407 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6psb\" (UniqueName: \"kubernetes.io/projected/01a17f7c-b39e-4dd6-9a40-d474056ee41a-kube-api-access-w6psb\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.740423 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:42 crc kubenswrapper[4684]: I0123 09:57:42.740435 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01a17f7c-b39e-4dd6-9a40-d474056ee41a-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.198931 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" event={"ID":"01a17f7c-b39e-4dd6-9a40-d474056ee41a","Type":"ContainerDied","Data":"a9d11ac2a4175b06d4669411ad429ce225a8c17de19832d0bcb3f22bfef707aa"} Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.199519 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d11ac2a4175b06d4669411ad429ce225a8c17de19832d0bcb3f22bfef707aa" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.198992 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.347575 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8"] Jan 23 09:57:43 crc kubenswrapper[4684]: E0123 09:57:43.348117 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a17f7c-b39e-4dd6-9a40-d474056ee41a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.348140 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a17f7c-b39e-4dd6-9a40-d474056ee41a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.348368 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a17f7c-b39e-4dd6-9a40-d474056ee41a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.349104 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.355974 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.356038 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.356002 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.356142 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.356226 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.359187 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8"] Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.453233 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.453291 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.453335 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.453370 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2jv\" (UniqueName: \"kubernetes.io/projected/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-kube-api-access-4z2jv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.555316 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.555753 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.555845 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.555909 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2jv\" (UniqueName: \"kubernetes.io/projected/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-kube-api-access-4z2jv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.564865 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.567053 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.574521 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.575572 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2jv\" (UniqueName: \"kubernetes.io/projected/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-kube-api-access-4z2jv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:43 crc kubenswrapper[4684]: I0123 09:57:43.671693 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:57:44 crc kubenswrapper[4684]: I0123 09:57:44.206602 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8"] Jan 23 09:57:45 crc kubenswrapper[4684]: I0123 09:57:45.219858 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" event={"ID":"8cbed0d5-0896-4efe-af09-8469dcbd2cfb","Type":"ContainerStarted","Data":"c1ad8ac5cafe56541abcc448ca7cf975af9f265940b7ce25cac7fc769f9c1150"} Jan 23 09:57:46 crc kubenswrapper[4684]: I0123 09:57:46.228185 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" event={"ID":"8cbed0d5-0896-4efe-af09-8469dcbd2cfb","Type":"ContainerStarted","Data":"1d072e0c9a43278e26685e88214dafa77b3b44cf5dadec4c6f7b0975bc5efc33"} Jan 23 09:58:32 crc kubenswrapper[4684]: I0123 09:58:32.670282 4684 generic.go:334] "Generic (PLEG): container finished" podID="8cbed0d5-0896-4efe-af09-8469dcbd2cfb" containerID="1d072e0c9a43278e26685e88214dafa77b3b44cf5dadec4c6f7b0975bc5efc33" exitCode=0 Jan 23 09:58:32 crc kubenswrapper[4684]: I0123 09:58:32.670389 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" event={"ID":"8cbed0d5-0896-4efe-af09-8469dcbd2cfb","Type":"ContainerDied","Data":"1d072e0c9a43278e26685e88214dafa77b3b44cf5dadec4c6f7b0975bc5efc33"} Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.082754 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.110643 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-inventory\") pod \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.110758 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z2jv\" (UniqueName: \"kubernetes.io/projected/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-kube-api-access-4z2jv\") pod \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.110854 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ssh-key-openstack-edpm-ipam\") pod \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.110952 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ceph\") pod \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\" (UID: \"8cbed0d5-0896-4efe-af09-8469dcbd2cfb\") " Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.117892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ceph" (OuterVolumeSpecName: "ceph") pod "8cbed0d5-0896-4efe-af09-8469dcbd2cfb" (UID: "8cbed0d5-0896-4efe-af09-8469dcbd2cfb"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.129822 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-kube-api-access-4z2jv" (OuterVolumeSpecName: "kube-api-access-4z2jv") pod "8cbed0d5-0896-4efe-af09-8469dcbd2cfb" (UID: "8cbed0d5-0896-4efe-af09-8469dcbd2cfb"). InnerVolumeSpecName "kube-api-access-4z2jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.142459 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8cbed0d5-0896-4efe-af09-8469dcbd2cfb" (UID: "8cbed0d5-0896-4efe-af09-8469dcbd2cfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.148087 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-inventory" (OuterVolumeSpecName: "inventory") pod "8cbed0d5-0896-4efe-af09-8469dcbd2cfb" (UID: "8cbed0d5-0896-4efe-af09-8469dcbd2cfb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.213478 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z2jv\" (UniqueName: \"kubernetes.io/projected/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-kube-api-access-4z2jv\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.213526 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.213542 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.213553 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cbed0d5-0896-4efe-af09-8469dcbd2cfb-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.688367 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" event={"ID":"8cbed0d5-0896-4efe-af09-8469dcbd2cfb","Type":"ContainerDied","Data":"c1ad8ac5cafe56541abcc448ca7cf975af9f265940b7ce25cac7fc769f9c1150"} Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.688419 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ad8ac5cafe56541abcc448ca7cf975af9f265940b7ce25cac7fc769f9c1150" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.688446 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.784588 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpqtq"] Jan 23 09:58:34 crc kubenswrapper[4684]: E0123 09:58:34.785158 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbed0d5-0896-4efe-af09-8469dcbd2cfb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.785193 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbed0d5-0896-4efe-af09-8469dcbd2cfb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.785439 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbed0d5-0896-4efe-af09-8469dcbd2cfb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.786235 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.790830 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.791022 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.791120 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.793908 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.794400 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.804571 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpqtq"] Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.826566 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.826795 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.826847 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wmn\" (UniqueName: \"kubernetes.io/projected/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-kube-api-access-d7wmn\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc 
kubenswrapper[4684]: I0123 09:58:34.826882 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ceph\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.928365 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.928489 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.928532 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wmn\" (UniqueName: \"kubernetes.io/projected/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-kube-api-access-d7wmn\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.928566 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ceph\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.932295 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ceph\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.932952 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.936262 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:34 crc kubenswrapper[4684]: I0123 09:58:34.949422 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wmn\" (UniqueName: \"kubernetes.io/projected/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-kube-api-access-d7wmn\") pod \"ssh-known-hosts-edpm-deployment-bpqtq\" (UID: 
\"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:35 crc kubenswrapper[4684]: I0123 09:58:35.102876 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:35 crc kubenswrapper[4684]: I0123 09:58:35.675630 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bpqtq"] Jan 23 09:58:35 crc kubenswrapper[4684]: I0123 09:58:35.679909 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:58:35 crc kubenswrapper[4684]: I0123 09:58:35.699721 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" event={"ID":"d7513ac8-1304-4762-a2f2-6d3b152fc4a7","Type":"ContainerStarted","Data":"dcbd5895deca75c69ea3e0521c4eb593510bbfb2152e0b5f11c91b3e9188e592"} Jan 23 09:58:36 crc kubenswrapper[4684]: I0123 09:58:36.709737 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" event={"ID":"d7513ac8-1304-4762-a2f2-6d3b152fc4a7","Type":"ContainerStarted","Data":"8790f40b98de7e202fe97896614c66a0801d387af4f1fff6cd917a6f3ff5cae0"} Jan 23 09:58:36 crc kubenswrapper[4684]: I0123 09:58:36.729373 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" podStartSLOduration=2.237056006 podStartE2EDuration="2.729349581s" podCreationTimestamp="2026-01-23 09:58:34 +0000 UTC" firstStartedPulling="2026-01-23 09:58:35.679576419 +0000 UTC m=+3088.302954960" lastFinishedPulling="2026-01-23 09:58:36.171869994 +0000 UTC m=+3088.795248535" observedRunningTime="2026-01-23 09:58:36.723797232 +0000 UTC m=+3089.347175773" watchObservedRunningTime="2026-01-23 09:58:36.729349581 +0000 UTC m=+3089.352728132" Jan 23 09:58:43 crc kubenswrapper[4684]: I0123 09:58:43.728334 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:58:43 crc kubenswrapper[4684]: I0123 09:58:43.728869 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:58:48 crc kubenswrapper[4684]: I0123 09:58:48.806408 4684 generic.go:334] "Generic (PLEG): container finished" podID="d7513ac8-1304-4762-a2f2-6d3b152fc4a7" containerID="8790f40b98de7e202fe97896614c66a0801d387af4f1fff6cd917a6f3ff5cae0" exitCode=0 Jan 23 09:58:48 crc kubenswrapper[4684]: I0123 09:58:48.806770 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" event={"ID":"d7513ac8-1304-4762-a2f2-6d3b152fc4a7","Type":"ContainerDied","Data":"8790f40b98de7e202fe97896614c66a0801d387af4f1fff6cd917a6f3ff5cae0"} Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.247266 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.325129 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ssh-key-openstack-edpm-ipam\") pod \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.325195 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ceph\") pod \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.325240 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wmn\" (UniqueName: \"kubernetes.io/projected/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-kube-api-access-d7wmn\") pod \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.325307 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-inventory-0\") pod \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\" (UID: \"d7513ac8-1304-4762-a2f2-6d3b152fc4a7\") " Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.331280 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-kube-api-access-d7wmn" (OuterVolumeSpecName: "kube-api-access-d7wmn") pod "d7513ac8-1304-4762-a2f2-6d3b152fc4a7" (UID: "d7513ac8-1304-4762-a2f2-6d3b152fc4a7"). InnerVolumeSpecName "kube-api-access-d7wmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.331744 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ceph" (OuterVolumeSpecName: "ceph") pod "d7513ac8-1304-4762-a2f2-6d3b152fc4a7" (UID: "d7513ac8-1304-4762-a2f2-6d3b152fc4a7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.356364 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d7513ac8-1304-4762-a2f2-6d3b152fc4a7" (UID: "d7513ac8-1304-4762-a2f2-6d3b152fc4a7"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.363862 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d7513ac8-1304-4762-a2f2-6d3b152fc4a7" (UID: "d7513ac8-1304-4762-a2f2-6d3b152fc4a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.427837 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.427869 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.427880 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wmn\" (UniqueName: \"kubernetes.io/projected/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-kube-api-access-d7wmn\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.427888 4684 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d7513ac8-1304-4762-a2f2-6d3b152fc4a7-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.829737 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" event={"ID":"d7513ac8-1304-4762-a2f2-6d3b152fc4a7","Type":"ContainerDied","Data":"dcbd5895deca75c69ea3e0521c4eb593510bbfb2152e0b5f11c91b3e9188e592"} Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.829776 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbd5895deca75c69ea3e0521c4eb593510bbfb2152e0b5f11c91b3e9188e592" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.829832 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bpqtq" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.926573 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn"] Jan 23 09:58:50 crc kubenswrapper[4684]: E0123 09:58:50.927223 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7513ac8-1304-4762-a2f2-6d3b152fc4a7" containerName="ssh-known-hosts-edpm-deployment" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.927242 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7513ac8-1304-4762-a2f2-6d3b152fc4a7" containerName="ssh-known-hosts-edpm-deployment" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.927433 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7513ac8-1304-4762-a2f2-6d3b152fc4a7" containerName="ssh-known-hosts-edpm-deployment" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.929658 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.932083 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.932336 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.932519 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.940181 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.940594 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:58:50 crc kubenswrapper[4684]: I0123 09:58:50.947343 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn"] Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.042859 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnz6r\" (UniqueName: \"kubernetes.io/projected/1139aa20-9131-40c7-bd06-f108d5ac42ab-kube-api-access-pnz6r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.043152 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.043271 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.043476 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.145498 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.145611 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.145735 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnz6r\" (UniqueName: \"kubernetes.io/projected/1139aa20-9131-40c7-bd06-f108d5ac42ab-kube-api-access-pnz6r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.145839 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.149879 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.156218 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.156795 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.168330 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnz6r\" (UniqueName: \"kubernetes.io/projected/1139aa20-9131-40c7-bd06-f108d5ac42ab-kube-api-access-pnz6r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7wznn\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.249532 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.775302 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn"] Jan 23 09:58:51 crc kubenswrapper[4684]: I0123 09:58:51.840249 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" event={"ID":"1139aa20-9131-40c7-bd06-f108d5ac42ab","Type":"ContainerStarted","Data":"6046af0aedb1e87390ab4268abfe5da2667abddd996a562aa18064c51f998967"} Jan 23 09:58:52 crc kubenswrapper[4684]: I0123 09:58:52.863055 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" event={"ID":"1139aa20-9131-40c7-bd06-f108d5ac42ab","Type":"ContainerStarted","Data":"1f7ed55ff186e8df399201cb3150a9c121733c54215c80584251f75e0bb8689f"} Jan 23 09:58:52 crc kubenswrapper[4684]: I0123 09:58:52.882351 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" podStartSLOduration=2.3345231 podStartE2EDuration="2.882333899s" podCreationTimestamp="2026-01-23 09:58:50 +0000 UTC" firstStartedPulling="2026-01-23 09:58:51.770944908 +0000 UTC m=+3104.394323449" lastFinishedPulling="2026-01-23 09:58:52.318755707 +0000 UTC m=+3104.942134248" observedRunningTime="2026-01-23 09:58:52.882103832 +0000 UTC m=+3105.505482393" watchObservedRunningTime="2026-01-23 09:58:52.882333899 +0000 UTC m=+3105.505712440" Jan 23 09:59:00 crc kubenswrapper[4684]: I0123 09:59:00.932517 4684 generic.go:334] "Generic (PLEG): container finished" podID="1139aa20-9131-40c7-bd06-f108d5ac42ab" containerID="1f7ed55ff186e8df399201cb3150a9c121733c54215c80584251f75e0bb8689f" exitCode=0 Jan 23 09:59:00 crc kubenswrapper[4684]: I0123 09:59:00.933412 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" event={"ID":"1139aa20-9131-40c7-bd06-f108d5ac42ab","Type":"ContainerDied","Data":"1f7ed55ff186e8df399201cb3150a9c121733c54215c80584251f75e0bb8689f"} Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.374526 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.414110 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ceph\") pod \"1139aa20-9131-40c7-bd06-f108d5ac42ab\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.414228 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-inventory\") pod \"1139aa20-9131-40c7-bd06-f108d5ac42ab\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.414285 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnz6r\" (UniqueName: \"kubernetes.io/projected/1139aa20-9131-40c7-bd06-f108d5ac42ab-kube-api-access-pnz6r\") pod \"1139aa20-9131-40c7-bd06-f108d5ac42ab\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.414345 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ssh-key-openstack-edpm-ipam\") pod \"1139aa20-9131-40c7-bd06-f108d5ac42ab\" (UID: \"1139aa20-9131-40c7-bd06-f108d5ac42ab\") " Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.425900 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1139aa20-9131-40c7-bd06-f108d5ac42ab-kube-api-access-pnz6r" (OuterVolumeSpecName: "kube-api-access-pnz6r") pod "1139aa20-9131-40c7-bd06-f108d5ac42ab" (UID: "1139aa20-9131-40c7-bd06-f108d5ac42ab"). InnerVolumeSpecName "kube-api-access-pnz6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.428913 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ceph" (OuterVolumeSpecName: "ceph") pod "1139aa20-9131-40c7-bd06-f108d5ac42ab" (UID: "1139aa20-9131-40c7-bd06-f108d5ac42ab"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.443406 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1139aa20-9131-40c7-bd06-f108d5ac42ab" (UID: "1139aa20-9131-40c7-bd06-f108d5ac42ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.455309 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-inventory" (OuterVolumeSpecName: "inventory") pod "1139aa20-9131-40c7-bd06-f108d5ac42ab" (UID: "1139aa20-9131-40c7-bd06-f108d5ac42ab"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.517581 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.517850 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.517961 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1139aa20-9131-40c7-bd06-f108d5ac42ab-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.518044 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnz6r\" (UniqueName: \"kubernetes.io/projected/1139aa20-9131-40c7-bd06-f108d5ac42ab-kube-api-access-pnz6r\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.952536 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" event={"ID":"1139aa20-9131-40c7-bd06-f108d5ac42ab","Type":"ContainerDied","Data":"6046af0aedb1e87390ab4268abfe5da2667abddd996a562aa18064c51f998967"} Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.952581 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6046af0aedb1e87390ab4268abfe5da2667abddd996a562aa18064c51f998967" Jan 23 09:59:02 crc kubenswrapper[4684]: I0123 09:59:02.952646 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7wznn" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.040430 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c"] Jan 23 09:59:03 crc kubenswrapper[4684]: E0123 09:59:03.040821 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1139aa20-9131-40c7-bd06-f108d5ac42ab" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.040838 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1139aa20-9131-40c7-bd06-f108d5ac42ab" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.041084 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="1139aa20-9131-40c7-bd06-f108d5ac42ab" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.041810 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.045263 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.045531 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.049365 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.049592 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.049774 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.072447 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c"] Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.130276 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.130849 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.130889 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lm6\" (UniqueName: \"kubernetes.io/projected/89a1992b-4dc8-4218-a148-bec983fddd94-kube-api-access-n4lm6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.131004 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.234164 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.234242 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.234275 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lm6\" (UniqueName: \"kubernetes.io/projected/89a1992b-4dc8-4218-a148-bec983fddd94-kube-api-access-n4lm6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.234311 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.240577 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.240588 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.245470 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.257551 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lm6\" (UniqueName: \"kubernetes.io/projected/89a1992b-4dc8-4218-a148-bec983fddd94-kube-api-access-n4lm6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.371524 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.909366 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c"] Jan 23 09:59:03 crc kubenswrapper[4684]: W0123 09:59:03.910333 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89a1992b_4dc8_4218_a148_bec983fddd94.slice/crio-083d88f874a229ee092457a59eb7a0b803b1b98cb61afe156391a9b8b3972c39 WatchSource:0}: Error finding container 083d88f874a229ee092457a59eb7a0b803b1b98cb61afe156391a9b8b3972c39: Status 404 returned error can't find the container with id 083d88f874a229ee092457a59eb7a0b803b1b98cb61afe156391a9b8b3972c39 Jan 23 09:59:03 crc kubenswrapper[4684]: I0123 09:59:03.962575 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" event={"ID":"89a1992b-4dc8-4218-a148-bec983fddd94","Type":"ContainerStarted","Data":"083d88f874a229ee092457a59eb7a0b803b1b98cb61afe156391a9b8b3972c39"} Jan 23 09:59:04 crc kubenswrapper[4684]: I0123 09:59:04.973301 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" event={"ID":"89a1992b-4dc8-4218-a148-bec983fddd94","Type":"ContainerStarted","Data":"e6f487d37e2e861239eef93041d144bf584a6d4a8e953c12217248ee923398dd"} Jan 23 09:59:04 crc kubenswrapper[4684]: I0123 09:59:04.991273 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" podStartSLOduration=1.55099184 podStartE2EDuration="1.991240061s" podCreationTimestamp="2026-01-23 09:59:03 +0000 UTC" firstStartedPulling="2026-01-23 09:59:03.913622279 +0000 UTC m=+3116.537000820" lastFinishedPulling="2026-01-23 09:59:04.3538705 +0000 UTC m=+3116.977249041" observedRunningTime="2026-01-23 09:59:04.987233986 +0000 UTC m=+3117.610612547" watchObservedRunningTime="2026-01-23 09:59:04.991240061 +0000 UTC m=+3117.614618602" Jan 23 09:59:13 crc kubenswrapper[4684]: I0123 09:59:13.728266 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:59:13 crc kubenswrapper[4684]: I0123 09:59:13.728779 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:59:16 crc kubenswrapper[4684]: I0123 09:59:16.065747 4684 generic.go:334] "Generic (PLEG): container finished" podID="89a1992b-4dc8-4218-a148-bec983fddd94" containerID="e6f487d37e2e861239eef93041d144bf584a6d4a8e953c12217248ee923398dd" exitCode=0 Jan 23 09:59:16 crc kubenswrapper[4684]: I0123 09:59:16.065819 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" event={"ID":"89a1992b-4dc8-4218-a148-bec983fddd94","Type":"ContainerDied","Data":"e6f487d37e2e861239eef93041d144bf584a6d4a8e953c12217248ee923398dd"} Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.476815 4684 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.569369 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-inventory\") pod \"89a1992b-4dc8-4218-a148-bec983fddd94\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.569517 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ssh-key-openstack-edpm-ipam\") pod \"89a1992b-4dc8-4218-a148-bec983fddd94\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.569619 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ceph\") pod \"89a1992b-4dc8-4218-a148-bec983fddd94\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.569646 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4lm6\" (UniqueName: \"kubernetes.io/projected/89a1992b-4dc8-4218-a148-bec983fddd94-kube-api-access-n4lm6\") pod \"89a1992b-4dc8-4218-a148-bec983fddd94\" (UID: \"89a1992b-4dc8-4218-a148-bec983fddd94\") " Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.575682 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a1992b-4dc8-4218-a148-bec983fddd94-kube-api-access-n4lm6" (OuterVolumeSpecName: "kube-api-access-n4lm6") pod "89a1992b-4dc8-4218-a148-bec983fddd94" (UID: "89a1992b-4dc8-4218-a148-bec983fddd94"). InnerVolumeSpecName "kube-api-access-n4lm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.578966 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ceph" (OuterVolumeSpecName: "ceph") pod "89a1992b-4dc8-4218-a148-bec983fddd94" (UID: "89a1992b-4dc8-4218-a148-bec983fddd94"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.597550 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89a1992b-4dc8-4218-a148-bec983fddd94" (UID: "89a1992b-4dc8-4218-a148-bec983fddd94"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.611309 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-inventory" (OuterVolumeSpecName: "inventory") pod "89a1992b-4dc8-4218-a148-bec983fddd94" (UID: "89a1992b-4dc8-4218-a148-bec983fddd94"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.672530 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.673085 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.673105 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89a1992b-4dc8-4218-a148-bec983fddd94-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:17 crc kubenswrapper[4684]: I0123 09:59:17.673114 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4lm6\" (UniqueName: \"kubernetes.io/projected/89a1992b-4dc8-4218-a148-bec983fddd94-kube-api-access-n4lm6\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.094974 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" event={"ID":"89a1992b-4dc8-4218-a148-bec983fddd94","Type":"ContainerDied","Data":"083d88f874a229ee092457a59eb7a0b803b1b98cb61afe156391a9b8b3972c39"} Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.095015 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083d88f874a229ee092457a59eb7a0b803b1b98cb61afe156391a9b8b3972c39" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.095071 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.258127 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx"] Jan 23 09:59:18 crc kubenswrapper[4684]: E0123 09:59:18.258584 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a1992b-4dc8-4218-a148-bec983fddd94" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.258616 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a1992b-4dc8-4218-a148-bec983fddd94" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.258845 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a1992b-4dc8-4218-a148-bec983fddd94" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.259418 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.261762 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.261808 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.261839 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.261907 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.262958 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.263117 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.263288 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.263814 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.274512 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx"] Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.287871 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.287919 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnftm\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-kube-api-access-dnftm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.287959 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.287982 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288007 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288036 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288073 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288120 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288142 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288162 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288196 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288232 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.288249 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390399 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390419 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390439 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390496 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390553 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.390571 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.391447 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.391481 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnftm\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-kube-api-access-dnftm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.391566 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.391613 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.391650 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.391731 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 
09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.396802 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.396951 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.397333 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.398307 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.398778 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.398917 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.399013 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.399152 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") 
" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.399159 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.400821 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.405413 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.410523 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.412507 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnftm\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-kube-api-access-dnftm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:18 crc kubenswrapper[4684]: I0123 09:59:18.578555 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:19 crc kubenswrapper[4684]: I0123 09:59:19.110326 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx"] Jan 23 09:59:20 crc kubenswrapper[4684]: I0123 09:59:20.110338 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" event={"ID":"2aa3021c-18ad-49eb-ae34-b54e30548ccf","Type":"ContainerStarted","Data":"8663c203057cd30ed159dbc10f2da920a14814cd7356a99af148ee1efba4dcbb"} Jan 23 09:59:22 crc kubenswrapper[4684]: I0123 09:59:22.131186 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" event={"ID":"2aa3021c-18ad-49eb-ae34-b54e30548ccf","Type":"ContainerStarted","Data":"30216b8331135efdbb2e8689a778951552f712870106963f7f94657940a2f440"} Jan 23 09:59:22 crc kubenswrapper[4684]: I0123 09:59:22.194469 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" podStartSLOduration=2.332658817 podStartE2EDuration="4.194450424s" podCreationTimestamp="2026-01-23 09:59:18 +0000 UTC" firstStartedPulling="2026-01-23 09:59:19.110377952 +0000 UTC m=+3131.733756493" lastFinishedPulling="2026-01-23 09:59:20.972169559 +0000 UTC m=+3133.595548100" observedRunningTime="2026-01-23 09:59:22.191651513 +0000 UTC m=+3134.815030074" watchObservedRunningTime="2026-01-23 09:59:22.194450424 +0000 UTC m=+3134.817828965" Jan 23 09:59:43 crc kubenswrapper[4684]: I0123 09:59:43.728979 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:59:43 crc kubenswrapper[4684]: I0123 09:59:43.729535 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:59:43 crc kubenswrapper[4684]: I0123 09:59:43.729622 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 09:59:43 crc kubenswrapper[4684]: I0123 09:59:43.730544 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:59:43 crc kubenswrapper[4684]: I0123 09:59:43.730781 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" gracePeriod=600 Jan 23 09:59:44 crc kubenswrapper[4684]: I0123 09:59:44.330948 4684 generic.go:334] "Generic (PLEG): container finished" 
podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" exitCode=0 Jan 23 09:59:44 crc kubenswrapper[4684]: I0123 09:59:44.331031 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"} Jan 23 09:59:44 crc kubenswrapper[4684]: I0123 09:59:44.331093 4684 scope.go:117] "RemoveContainer" containerID="8d1f652ff74148a06a7cece32bb007304d1575a17aa3e4576d5bb01005d192bb" Jan 23 09:59:44 crc kubenswrapper[4684]: E0123 09:59:44.701233 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:59:45 crc kubenswrapper[4684]: I0123 09:59:45.345095 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 09:59:45 crc kubenswrapper[4684]: E0123 09:59:45.345645 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:59:56 crc kubenswrapper[4684]: I0123 09:59:56.444616 4684 generic.go:334] "Generic (PLEG): container finished" podID="2aa3021c-18ad-49eb-ae34-b54e30548ccf" containerID="30216b8331135efdbb2e8689a778951552f712870106963f7f94657940a2f440" exitCode=0 Jan 23 09:59:56 crc kubenswrapper[4684]: I0123 09:59:56.445029 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" event={"ID":"2aa3021c-18ad-49eb-ae34-b54e30548ccf","Type":"ContainerDied","Data":"30216b8331135efdbb2e8689a778951552f712870106963f7f94657940a2f440"} Jan 23 09:59:57 crc kubenswrapper[4684]: I0123 09:59:57.594932 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 09:59:57 crc kubenswrapper[4684]: E0123 09:59:57.595624 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 09:59:57 crc kubenswrapper[4684]: I0123 09:59:57.909231 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.009608 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnftm\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-kube-api-access-dnftm\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.010040 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ssh-key-openstack-edpm-ipam\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.010203 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.010345 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ovn-combined-ca-bundle\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.010536 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ceph\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.010766 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-neutron-metadata-combined-ca-bundle\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.010892 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.011113 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-libvirt-combined-ca-bundle\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.011291 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-nova-combined-ca-bundle\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: 
I0123 09:59:58.011429 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.011549 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-repo-setup-combined-ca-bundle\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.011647 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-bootstrap-combined-ca-bundle\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.011748 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-inventory\") pod \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\" (UID: \"2aa3021c-18ad-49eb-ae34-b54e30548ccf\") " Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.016630 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.016682 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ceph" (OuterVolumeSpecName: "ceph") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.016734 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-kube-api-access-dnftm" (OuterVolumeSpecName: "kube-api-access-dnftm") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "kube-api-access-dnftm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.017129 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.018389 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.020287 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.020734 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.023187 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.027973 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.036013 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.036107 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.045214 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-inventory" (OuterVolumeSpecName: "inventory") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.045926 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2aa3021c-18ad-49eb-ae34-b54e30548ccf" (UID: "2aa3021c-18ad-49eb-ae34-b54e30548ccf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114448 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnftm\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-kube-api-access-dnftm\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114502 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114518 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114531 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114544 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114555 4684 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114571 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114585 4684 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114597 4684 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114609 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2aa3021c-18ad-49eb-ae34-b54e30548ccf-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114620 4684 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114630 4684 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.114644 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2aa3021c-18ad-49eb-ae34-b54e30548ccf-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.463544 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" event={"ID":"2aa3021c-18ad-49eb-ae34-b54e30548ccf","Type":"ContainerDied","Data":"8663c203057cd30ed159dbc10f2da920a14814cd7356a99af148ee1efba4dcbb"} Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.463584 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8663c203057cd30ed159dbc10f2da920a14814cd7356a99af148ee1efba4dcbb" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.463593 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.560591 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8"] Jan 23 09:59:58 crc kubenswrapper[4684]: E0123 09:59:58.561054 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa3021c-18ad-49eb-ae34-b54e30548ccf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.561083 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa3021c-18ad-49eb-ae34-b54e30548ccf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.561285 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa3021c-18ad-49eb-ae34-b54e30548ccf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.561886 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.563713 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.564039 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.564916 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.565239 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.565444 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.594149 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8"] Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.725220 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.725269 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxq9n\" (UniqueName: \"kubernetes.io/projected/5f77b49d-cf17-4b55-9ef8-0d0e13966845-kube-api-access-cxq9n\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.725299 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.725333 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.827484 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.827543 4684 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cxq9n\" (UniqueName: \"kubernetes.io/projected/5f77b49d-cf17-4b55-9ef8-0d0e13966845-kube-api-access-cxq9n\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.827571 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.827608 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.833457 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.834350 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.835569 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.849349 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxq9n\" (UniqueName: \"kubernetes.io/projected/5f77b49d-cf17-4b55-9ef8-0d0e13966845-kube-api-access-cxq9n\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:58 crc kubenswrapper[4684]: I0123 09:59:58.880124 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" Jan 23 09:59:59 crc kubenswrapper[4684]: I0123 09:59:59.419979 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8"] Jan 23 09:59:59 crc kubenswrapper[4684]: W0123 09:59:59.429010 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f77b49d_cf17_4b55_9ef8_0d0e13966845.slice/crio-90feac58626836b16dc214bfc27cd410690234296809ec4820040f07433f12ba WatchSource:0}: Error finding container 90feac58626836b16dc214bfc27cd410690234296809ec4820040f07433f12ba: Status 404 returned error can't find the container with id 90feac58626836b16dc214bfc27cd410690234296809ec4820040f07433f12ba Jan 23 09:59:59 crc kubenswrapper[4684]: I0123 09:59:59.477462 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" event={"ID":"5f77b49d-cf17-4b55-9ef8-0d0e13966845","Type":"ContainerStarted","Data":"90feac58626836b16dc214bfc27cd410690234296809ec4820040f07433f12ba"} Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.157312 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd"] Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.164407 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.187892 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.188434 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.189836 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd"] Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.259121 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff06a00d-310c-41dc-bae5-042190b4be89-config-volume\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.259266 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff06a00d-310c-41dc-bae5-042190b4be89-secret-volume\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.259440 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvw7w\" (UniqueName: \"kubernetes.io/projected/ff06a00d-310c-41dc-bae5-042190b4be89-kube-api-access-zvw7w\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 
10:00:00.361351 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff06a00d-310c-41dc-bae5-042190b4be89-secret-volume\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.361523 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvw7w\" (UniqueName: \"kubernetes.io/projected/ff06a00d-310c-41dc-bae5-042190b4be89-kube-api-access-zvw7w\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.361564 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff06a00d-310c-41dc-bae5-042190b4be89-config-volume\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.362523 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff06a00d-310c-41dc-bae5-042190b4be89-config-volume\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.367948 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff06a00d-310c-41dc-bae5-042190b4be89-secret-volume\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.388634 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvw7w\" (UniqueName: \"kubernetes.io/projected/ff06a00d-310c-41dc-bae5-042190b4be89-kube-api-access-zvw7w\") pod \"collect-profiles-29486040-sg9qd\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:00 crc kubenswrapper[4684]: I0123 10:00:00.503263 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" Jan 23 10:00:01 crc kubenswrapper[4684]: I0123 10:00:01.013554 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd"] Jan 23 10:00:01 crc kubenswrapper[4684]: W0123 10:00:01.015766 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff06a00d_310c_41dc_bae5_042190b4be89.slice/crio-6a28de29a3fdeb599dca2013a586c1339b5aeb08abbdbc451c432b1e77211b66 WatchSource:0}: Error finding container 6a28de29a3fdeb599dca2013a586c1339b5aeb08abbdbc451c432b1e77211b66: Status 404 returned error can't find the container with id 6a28de29a3fdeb599dca2013a586c1339b5aeb08abbdbc451c432b1e77211b66 Jan 23 10:00:01 crc kubenswrapper[4684]: I0123 10:00:01.500098 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" event={"ID":"5f77b49d-cf17-4b55-9ef8-0d0e13966845","Type":"ContainerStarted","Data":"72046a57fb5c65179e3bca3d6b3617d5e66009234f4596ee6c4005106987f5dc"} Jan 23 10:00:01 crc kubenswrapper[4684]: I0123 10:00:01.502921 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" event={"ID":"ff06a00d-310c-41dc-bae5-042190b4be89","Type":"ContainerStarted","Data":"6a28de29a3fdeb599dca2013a586c1339b5aeb08abbdbc451c432b1e77211b66"} Jan 23 10:00:01 crc kubenswrapper[4684]: I0123 10:00:01.529632 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" podStartSLOduration=2.531174011 podStartE2EDuration="3.529611149s" podCreationTimestamp="2026-01-23 09:59:58 +0000 UTC" firstStartedPulling="2026-01-23 09:59:59.431842626 +0000 UTC m=+3172.055221167" lastFinishedPulling="2026-01-23 10:00:00.430279764 +0000 UTC m=+3173.053658305" observedRunningTime="2026-01-23 10:00:01.518662194 +0000 UTC m=+3174.142040735" watchObservedRunningTime="2026-01-23 10:00:01.529611149 +0000 UTC m=+3174.152989700" Jan 23 10:00:02 crc kubenswrapper[4684]: I0123 10:00:02.520514 4684 generic.go:334] "Generic (PLEG): container finished" podID="ff06a00d-310c-41dc-bae5-042190b4be89" containerID="17eaf213586c18cd0815c547fe7ac44336e510e6db9d3bfc57b801f2786cc066" exitCode=0 Jan 23 10:00:02 crc kubenswrapper[4684]: I0123 10:00:02.520567 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" event={"ID":"ff06a00d-310c-41dc-bae5-042190b4be89","Type":"ContainerDied","Data":"17eaf213586c18cd0815c547fe7ac44336e510e6db9d3bfc57b801f2786cc066"} Jan 23 10:00:03 crc kubenswrapper[4684]: I0123 10:00:03.933295 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.056992 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff06a00d-310c-41dc-bae5-042190b4be89-secret-volume\") pod \"ff06a00d-310c-41dc-bae5-042190b4be89\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") "
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.057075 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff06a00d-310c-41dc-bae5-042190b4be89-config-volume\") pod \"ff06a00d-310c-41dc-bae5-042190b4be89\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") "
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.057154 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvw7w\" (UniqueName: \"kubernetes.io/projected/ff06a00d-310c-41dc-bae5-042190b4be89-kube-api-access-zvw7w\") pod \"ff06a00d-310c-41dc-bae5-042190b4be89\" (UID: \"ff06a00d-310c-41dc-bae5-042190b4be89\") "
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.058780 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff06a00d-310c-41dc-bae5-042190b4be89-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff06a00d-310c-41dc-bae5-042190b4be89" (UID: "ff06a00d-310c-41dc-bae5-042190b4be89"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.063119 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff06a00d-310c-41dc-bae5-042190b4be89-kube-api-access-zvw7w" (OuterVolumeSpecName: "kube-api-access-zvw7w") pod "ff06a00d-310c-41dc-bae5-042190b4be89" (UID: "ff06a00d-310c-41dc-bae5-042190b4be89"). InnerVolumeSpecName "kube-api-access-zvw7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.063861 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff06a00d-310c-41dc-bae5-042190b4be89-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff06a00d-310c-41dc-bae5-042190b4be89" (UID: "ff06a00d-310c-41dc-bae5-042190b4be89"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.159519 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff06a00d-310c-41dc-bae5-042190b4be89-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.159823 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff06a00d-310c-41dc-bae5-042190b4be89-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.159912 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvw7w\" (UniqueName: \"kubernetes.io/projected/ff06a00d-310c-41dc-bae5-042190b4be89-kube-api-access-zvw7w\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.541131 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd" event={"ID":"ff06a00d-310c-41dc-bae5-042190b4be89","Type":"ContainerDied","Data":"6a28de29a3fdeb599dca2013a586c1339b5aeb08abbdbc451c432b1e77211b66"}
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.541171 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a28de29a3fdeb599dca2013a586c1339b5aeb08abbdbc451c432b1e77211b66"
Jan 23 10:00:04 crc kubenswrapper[4684]: I0123 10:00:04.541214 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd"
Jan 23 10:00:05 crc kubenswrapper[4684]: I0123 10:00:05.043185 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57"]
Jan 23 10:00:05 crc kubenswrapper[4684]: I0123 10:00:05.064576 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-jsj57"]
Jan 23 10:00:05 crc kubenswrapper[4684]: I0123 10:00:05.594894 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47989d9-6d56-4b95-8678-6aa1e287dded" path="/var/lib/kubelet/pods/f47989d9-6d56-4b95-8678-6aa1e287dded/volumes"
Jan 23 10:00:07 crc kubenswrapper[4684]: I0123 10:00:07.567954 4684 generic.go:334] "Generic (PLEG): container finished" podID="5f77b49d-cf17-4b55-9ef8-0d0e13966845" containerID="72046a57fb5c65179e3bca3d6b3617d5e66009234f4596ee6c4005106987f5dc" exitCode=0
Jan 23 10:00:07 crc kubenswrapper[4684]: I0123 10:00:07.568018 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" event={"ID":"5f77b49d-cf17-4b55-9ef8-0d0e13966845","Type":"ContainerDied","Data":"72046a57fb5c65179e3bca3d6b3617d5e66009234f4596ee6c4005106987f5dc"}
Jan 23 10:00:08 crc kubenswrapper[4684]: I0123 10:00:08.581984 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:00:08 crc kubenswrapper[4684]: E0123 10:00:08.582616 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:00:08 crc kubenswrapper[4684]: I0123 10:00:08.991628 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.157376 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ssh-key-openstack-edpm-ipam\") pod \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") "
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.157490 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ceph\") pod \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") "
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.157539 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-inventory\") pod \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") "
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.157643 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxq9n\" (UniqueName: \"kubernetes.io/projected/5f77b49d-cf17-4b55-9ef8-0d0e13966845-kube-api-access-cxq9n\") pod \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\" (UID: \"5f77b49d-cf17-4b55-9ef8-0d0e13966845\") "
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.164413 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ceph" (OuterVolumeSpecName: "ceph") pod "5f77b49d-cf17-4b55-9ef8-0d0e13966845" (UID: "5f77b49d-cf17-4b55-9ef8-0d0e13966845"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.165612 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f77b49d-cf17-4b55-9ef8-0d0e13966845-kube-api-access-cxq9n" (OuterVolumeSpecName: "kube-api-access-cxq9n") pod "5f77b49d-cf17-4b55-9ef8-0d0e13966845" (UID: "5f77b49d-cf17-4b55-9ef8-0d0e13966845"). InnerVolumeSpecName "kube-api-access-cxq9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.188769 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5f77b49d-cf17-4b55-9ef8-0d0e13966845" (UID: "5f77b49d-cf17-4b55-9ef8-0d0e13966845"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.190256 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-inventory" (OuterVolumeSpecName: "inventory") pod "5f77b49d-cf17-4b55-9ef8-0d0e13966845" (UID: "5f77b49d-cf17-4b55-9ef8-0d0e13966845"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.260260 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.260327 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-ceph\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.260340 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f77b49d-cf17-4b55-9ef8-0d0e13966845-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.260386 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxq9n\" (UniqueName: \"kubernetes.io/projected/5f77b49d-cf17-4b55-9ef8-0d0e13966845-kube-api-access-cxq9n\") on node \"crc\" DevicePath \"\""
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.587098 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.594596 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8" event={"ID":"5f77b49d-cf17-4b55-9ef8-0d0e13966845","Type":"ContainerDied","Data":"90feac58626836b16dc214bfc27cd410690234296809ec4820040f07433f12ba"}
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.595859 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90feac58626836b16dc214bfc27cd410690234296809ec4820040f07433f12ba"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.743271 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"]
Jan 23 10:00:09 crc kubenswrapper[4684]: E0123 10:00:09.744168 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77b49d-cf17-4b55-9ef8-0d0e13966845" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.744203 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77b49d-cf17-4b55-9ef8-0d0e13966845" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:00:09 crc kubenswrapper[4684]: E0123 10:00:09.744220 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff06a00d-310c-41dc-bae5-042190b4be89" containerName="collect-profiles"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.744230 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff06a00d-310c-41dc-bae5-042190b4be89" containerName="collect-profiles"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.744489 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77b49d-cf17-4b55-9ef8-0d0e13966845" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.744507 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff06a00d-310c-41dc-bae5-042190b4be89" containerName="collect-profiles"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.745264 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.749820 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.750065 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.750385 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.751068 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.751633 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.752675 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.760311 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"]
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.872942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c79nl\" (UniqueName: \"kubernetes.io/projected/e755b648-4ecf-4fc5-922a-39c5061827de-kube-api-access-c79nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.873033 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.873072 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e755b648-4ecf-4fc5-922a-39c5061827de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.873162 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.873191 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.873227 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.974431 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.974617 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c79nl\" (UniqueName: \"kubernetes.io/projected/e755b648-4ecf-4fc5-922a-39c5061827de-kube-api-access-c79nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.974656 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.974686 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e755b648-4ecf-4fc5-922a-39c5061827de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.974767 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.974786 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.975993 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e755b648-4ecf-4fc5-922a-39c5061827de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.978829 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.979205 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.980452 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.984138 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:09 crc kubenswrapper[4684]: I0123 10:00:09.996079 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c79nl\" (UniqueName: \"kubernetes.io/projected/e755b648-4ecf-4fc5-922a-39c5061827de-kube-api-access-c79nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9klss\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:10 crc kubenswrapper[4684]: I0123 10:00:10.062102 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:00:10 crc kubenswrapper[4684]: I0123 10:00:10.627985 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"]
Jan 23 10:00:11 crc kubenswrapper[4684]: I0123 10:00:11.610751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss" event={"ID":"e755b648-4ecf-4fc5-922a-39c5061827de","Type":"ContainerStarted","Data":"1ad677c76d0d53eac6def3d93bae7fb2c8e7baea9af0695e537b9bb403c7a130"}
Jan 23 10:00:11 crc kubenswrapper[4684]: I0123 10:00:11.611317 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss" event={"ID":"e755b648-4ecf-4fc5-922a-39c5061827de","Type":"ContainerStarted","Data":"edf7d25012043b9e2ea8022a0c455a7365a13d6f55c5da8b551e3694ae0c3449"}
Jan 23 10:00:11 crc kubenswrapper[4684]: I0123 10:00:11.639652 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss" podStartSLOduration=2.178084316 podStartE2EDuration="2.639600657s" podCreationTimestamp="2026-01-23 10:00:09 +0000 UTC" firstStartedPulling="2026-01-23 10:00:10.632609953 +0000 UTC m=+3183.255988484" lastFinishedPulling="2026-01-23 10:00:11.094126284 +0000 UTC m=+3183.717504825" observedRunningTime="2026-01-23 10:00:11.637573909 +0000 UTC m=+3184.260952450" watchObservedRunningTime="2026-01-23 10:00:11.639600657 +0000 UTC m=+3184.262979208"
Jan 23 10:00:15 crc kubenswrapper[4684]: I0123 10:00:15.874773 4684 scope.go:117] "RemoveContainer" containerID="94c4efdb39c91980e5f8ba1eb61eda93e2821a0ba403c3f60c2903df32e13b81"
Jan 23 10:00:19 crc kubenswrapper[4684]: I0123 10:00:19.582007 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:00:19 crc kubenswrapper[4684]: E0123 10:00:19.582615 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:00:34 crc kubenswrapper[4684]: I0123 10:00:34.581971 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:00:34 crc kubenswrapper[4684]: E0123 10:00:34.583977 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:00:47 crc kubenswrapper[4684]: I0123 10:00:47.589020 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:00:47 crc kubenswrapper[4684]: E0123 10:00:47.589812 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.157884 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486041-8929f"]
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.159841 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.175176 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486041-8929f"]
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.289395 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-combined-ca-bundle\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.290040 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-fernet-keys\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.290152 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtbf\" (UniqueName: \"kubernetes.io/projected/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-kube-api-access-vrtbf\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.290182 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-config-data\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.392511 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-combined-ca-bundle\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.392582 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-fernet-keys\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.392683 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtbf\" (UniqueName: \"kubernetes.io/projected/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-kube-api-access-vrtbf\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.392743 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-config-data\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.401553 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-fernet-keys\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.403522 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-combined-ca-bundle\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.418996 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-config-data\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.422662 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtbf\" (UniqueName: \"kubernetes.io/projected/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-kube-api-access-vrtbf\") pod \"keystone-cron-29486041-8929f\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") " pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.492580 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:00 crc kubenswrapper[4684]: I0123 10:01:00.583778 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:01:00 crc kubenswrapper[4684]: E0123 10:01:00.584211 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:01:01 crc kubenswrapper[4684]: I0123 10:01:01.031077 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486041-8929f"]
Jan 23 10:01:01 crc kubenswrapper[4684]: I0123 10:01:01.070935 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-8929f" event={"ID":"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2","Type":"ContainerStarted","Data":"5b8163548c4f8ac7d6a9104912d61437d809028ded6c273d9392bb21b90695df"}
Jan 23 10:01:02 crc kubenswrapper[4684]: I0123 10:01:02.079682 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-8929f" event={"ID":"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2","Type":"ContainerStarted","Data":"01c7d74e128632cc63d667dc538c921cdce93f964f9bde02a7716329e238dec4"}
Jan 23 10:01:07 crc kubenswrapper[4684]: I0123 10:01:07.451263 4684 generic.go:334] "Generic (PLEG): container finished" podID="1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" containerID="01c7d74e128632cc63d667dc538c921cdce93f964f9bde02a7716329e238dec4" exitCode=0
Jan 23 10:01:07 crc kubenswrapper[4684]: I0123 10:01:07.451867 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-8929f" event={"ID":"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2","Type":"ContainerDied","Data":"01c7d74e128632cc63d667dc538c921cdce93f964f9bde02a7716329e238dec4"}
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.814628 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.951543 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-fernet-keys\") pod \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") "
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.952099 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-combined-ca-bundle\") pod \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") "
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.952225 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-config-data\") pod \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") "
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.952267 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrtbf\" (UniqueName: \"kubernetes.io/projected/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-kube-api-access-vrtbf\") pod \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\" (UID: \"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2\") "
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.959228 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" (UID: "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.960932 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-kube-api-access-vrtbf" (OuterVolumeSpecName: "kube-api-access-vrtbf") pod "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" (UID: "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2"). InnerVolumeSpecName "kube-api-access-vrtbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:01:08 crc kubenswrapper[4684]: I0123 10:01:08.999656 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" (UID: "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.004221 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-config-data" (OuterVolumeSpecName: "config-data") pod "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" (UID: "1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.054575 4684 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.054617 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.054632 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.054644 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrtbf\" (UniqueName: \"kubernetes.io/projected/1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2-kube-api-access-vrtbf\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.468471 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-8929f" event={"ID":"1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2","Type":"ContainerDied","Data":"5b8163548c4f8ac7d6a9104912d61437d809028ded6c273d9392bb21b90695df"}
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.468715 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8163548c4f8ac7d6a9104912d61437d809028ded6c273d9392bb21b90695df"
Jan 23 10:01:09 crc kubenswrapper[4684]: I0123 10:01:09.468518 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-8929f"
Jan 23 10:01:15 crc kubenswrapper[4684]: I0123 10:01:15.582917 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:01:15 crc kubenswrapper[4684]: E0123 10:01:15.583814 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:01:27 crc kubenswrapper[4684]: I0123 10:01:27.587965 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:01:27 crc kubenswrapper[4684]: E0123 10:01:27.588659 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:01:38 crc kubenswrapper[4684]: I0123 10:01:38.582867 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3"
Jan 23 10:01:38 crc kubenswrapper[4684]: E0123 10:01:38.583925 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:01:42 crc kubenswrapper[4684]: I0123 10:01:42.731804 4684 generic.go:334] "Generic (PLEG): container finished" podID="e755b648-4ecf-4fc5-922a-39c5061827de" containerID="1ad677c76d0d53eac6def3d93bae7fb2c8e7baea9af0695e537b9bb403c7a130" exitCode=0
Jan 23 10:01:42 crc kubenswrapper[4684]: I0123 10:01:42.731899 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss" event={"ID":"e755b648-4ecf-4fc5-922a-39c5061827de","Type":"ContainerDied","Data":"1ad677c76d0d53eac6def3d93bae7fb2c8e7baea9af0695e537b9bb403c7a130"}
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.149902 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.259910 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ovn-combined-ca-bundle\") pod \"e755b648-4ecf-4fc5-922a-39c5061827de\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") "
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.260211 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c79nl\" (UniqueName: \"kubernetes.io/projected/e755b648-4ecf-4fc5-922a-39c5061827de-kube-api-access-c79nl\") pod \"e755b648-4ecf-4fc5-922a-39c5061827de\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") "
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.260250 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e755b648-4ecf-4fc5-922a-39c5061827de-ovncontroller-config-0\") pod \"e755b648-4ecf-4fc5-922a-39c5061827de\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") "
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.260308 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ceph\") pod \"e755b648-4ecf-4fc5-922a-39c5061827de\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") "
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.260373 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ssh-key-openstack-edpm-ipam\") pod \"e755b648-4ecf-4fc5-922a-39c5061827de\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") "
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.260445 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-inventory\") pod \"e755b648-4ecf-4fc5-922a-39c5061827de\" (UID: \"e755b648-4ecf-4fc5-922a-39c5061827de\") "
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.268250 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e755b648-4ecf-4fc5-922a-39c5061827de" (UID: "e755b648-4ecf-4fc5-922a-39c5061827de"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.331847 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ceph" (OuterVolumeSpecName: "ceph") pod "e755b648-4ecf-4fc5-922a-39c5061827de" (UID: "e755b648-4ecf-4fc5-922a-39c5061827de"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.349625 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e755b648-4ecf-4fc5-922a-39c5061827de-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e755b648-4ecf-4fc5-922a-39c5061827de" (UID: "e755b648-4ecf-4fc5-922a-39c5061827de"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.350982 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e755b648-4ecf-4fc5-922a-39c5061827de-kube-api-access-c79nl" (OuterVolumeSpecName: "kube-api-access-c79nl") pod "e755b648-4ecf-4fc5-922a-39c5061827de" (UID: "e755b648-4ecf-4fc5-922a-39c5061827de"). InnerVolumeSpecName "kube-api-access-c79nl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.373900 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.373947 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c79nl\" (UniqueName: \"kubernetes.io/projected/e755b648-4ecf-4fc5-922a-39c5061827de-kube-api-access-c79nl\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.373962 4684 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e755b648-4ecf-4fc5-922a-39c5061827de-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.373982 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ceph\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.377426 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-inventory" (OuterVolumeSpecName: "inventory") pod "e755b648-4ecf-4fc5-922a-39c5061827de" (UID: "e755b648-4ecf-4fc5-922a-39c5061827de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.395856 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e755b648-4ecf-4fc5-922a-39c5061827de" (UID: "e755b648-4ecf-4fc5-922a-39c5061827de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.476192 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.476242 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e755b648-4ecf-4fc5-922a-39c5061827de-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.751166 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss" event={"ID":"e755b648-4ecf-4fc5-922a-39c5061827de","Type":"ContainerDied","Data":"edf7d25012043b9e2ea8022a0c455a7365a13d6f55c5da8b551e3694ae0c3449"}
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.751214 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf7d25012043b9e2ea8022a0c455a7365a13d6f55c5da8b551e3694ae0c3449"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.751531 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9klss"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.867605 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"]
Jan 23 10:01:44 crc kubenswrapper[4684]: E0123 10:01:44.868025 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e755b648-4ecf-4fc5-922a-39c5061827de" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.868046 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e755b648-4ecf-4fc5-922a-39c5061827de" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:01:44 crc kubenswrapper[4684]: E0123 10:01:44.868092 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" containerName="keystone-cron"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.868103 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" containerName="keystone-cron"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.868316 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e755b648-4ecf-4fc5-922a-39c5061827de" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.868342 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2" containerName="keystone-cron"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.869078 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.873149 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.873182 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.873656 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.874287 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.874303 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.874353 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.885807 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.886043 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.886178 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.886306 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.886437 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzwf2\" (UniqueName: \"kubernetes.io/projected/cb533e15-1dac-453b-a0d7-041112a91f0b-kube-api-access-pzwf2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.886501 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.886592 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.887342 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.887659 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"]
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989056 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989120 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989158 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989195 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzwf2\" (UniqueName: \"kubernetes.io/projected/cb533e15-1dac-453b-a0d7-041112a91f0b-kube-api-access-pzwf2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989226 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989273 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.989331 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.992996 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.993339 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.994068 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.994194 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44 crc kubenswrapper[4684]: I0123 10:01:44.994597 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"
Jan 23 10:01:44
crc kubenswrapper[4684]: I0123 10:01:44.994888 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" Jan 23 10:01:45 crc kubenswrapper[4684]: I0123 10:01:45.007358 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzwf2\" (UniqueName: \"kubernetes.io/projected/cb533e15-1dac-453b-a0d7-041112a91f0b-kube-api-access-pzwf2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" Jan 23 10:01:45 crc kubenswrapper[4684]: I0123 10:01:45.185994 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" Jan 23 10:01:45 crc kubenswrapper[4684]: I0123 10:01:45.784859 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h"] Jan 23 10:01:46 crc kubenswrapper[4684]: I0123 10:01:46.767498 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" event={"ID":"cb533e15-1dac-453b-a0d7-041112a91f0b","Type":"ContainerStarted","Data":"bbafb31df6754ca600ac705e4a01df9cbd2c4c5925a13a9f6af51ca06d7e70b6"} Jan 23 10:01:47 crc kubenswrapper[4684]: I0123 10:01:47.779063 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" event={"ID":"cb533e15-1dac-453b-a0d7-041112a91f0b","Type":"ContainerStarted","Data":"923e14d4a10f187f77bf9fad305a32c5e311caeaf058c75a65a8c46ee77facb8"} Jan 23 10:01:47 crc kubenswrapper[4684]: I0123 10:01:47.801898 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" podStartSLOduration=3.041531143 podStartE2EDuration="3.801873335s" podCreationTimestamp="2026-01-23 10:01:44 +0000 UTC" firstStartedPulling="2026-01-23 10:01:45.792750917 +0000 UTC m=+3278.416129458" lastFinishedPulling="2026-01-23 10:01:46.553093099 +0000 UTC m=+3279.176471650" observedRunningTime="2026-01-23 10:01:47.798439157 +0000 UTC m=+3280.421817718" watchObservedRunningTime="2026-01-23 10:01:47.801873335 +0000 UTC m=+3280.425251876" Jan 23 10:01:51 crc kubenswrapper[4684]: I0123 10:01:51.581930 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:01:51 crc kubenswrapper[4684]: E0123 10:01:51.582797 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:02:02 crc kubenswrapper[4684]: I0123 10:02:02.581914 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:02:02 crc kubenswrapper[4684]: E0123 10:02:02.582625 4684 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:02:17 crc kubenswrapper[4684]: I0123 10:02:17.590168 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:02:17 crc kubenswrapper[4684]: E0123 10:02:17.590977 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:02:32 crc kubenswrapper[4684]: I0123 10:02:32.583146 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:02:32 crc kubenswrapper[4684]: E0123 10:02:32.584510 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:02:47 crc kubenswrapper[4684]: I0123 10:02:47.588393 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:02:47 crc kubenswrapper[4684]: E0123 10:02:47.589166 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:03:01 crc kubenswrapper[4684]: I0123 10:03:01.460382 4684 generic.go:334] "Generic (PLEG): container finished" podID="cb533e15-1dac-453b-a0d7-041112a91f0b" containerID="923e14d4a10f187f77bf9fad305a32c5e311caeaf058c75a65a8c46ee77facb8" exitCode=0 Jan 23 10:03:01 crc kubenswrapper[4684]: I0123 10:03:01.460449 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" event={"ID":"cb533e15-1dac-453b-a0d7-041112a91f0b","Type":"ContainerDied","Data":"923e14d4a10f187f77bf9fad305a32c5e311caeaf058c75a65a8c46ee77facb8"} Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.582511 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:03:02 crc kubenswrapper[4684]: E0123 10:03:02.582947 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.870310 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914017 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzwf2\" (UniqueName: \"kubernetes.io/projected/cb533e15-1dac-453b-a0d7-041112a91f0b-kube-api-access-pzwf2\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914123 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-metadata-combined-ca-bundle\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914163 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-inventory\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914316 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ceph\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914372 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ssh-key-openstack-edpm-ipam\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914422 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-nova-metadata-neutron-config-0\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.914461 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cb533e15-1dac-453b-a0d7-041112a91f0b\" (UID: \"cb533e15-1dac-453b-a0d7-041112a91f0b\") " Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.920226 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.927905 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ceph" (OuterVolumeSpecName: "ceph") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.928402 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb533e15-1dac-453b-a0d7-041112a91f0b-kube-api-access-pzwf2" (OuterVolumeSpecName: "kube-api-access-pzwf2") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "kube-api-access-pzwf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.946297 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.946719 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-inventory" (OuterVolumeSpecName: "inventory") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.947546 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:03:02 crc kubenswrapper[4684]: I0123 10:03:02.951009 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cb533e15-1dac-453b-a0d7-041112a91f0b" (UID: "cb533e15-1dac-453b-a0d7-041112a91f0b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.016802 4684 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.017128 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.017224 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.017312 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.017404 4684 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.017593 4684 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb533e15-1dac-453b-a0d7-041112a91f0b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.017726 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzwf2\" (UniqueName: \"kubernetes.io/projected/cb533e15-1dac-453b-a0d7-041112a91f0b-kube-api-access-pzwf2\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.477544 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" event={"ID":"cb533e15-1dac-453b-a0d7-041112a91f0b","Type":"ContainerDied","Data":"bbafb31df6754ca600ac705e4a01df9cbd2c4c5925a13a9f6af51ca06d7e70b6"} Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.478129 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbafb31df6754ca600ac705e4a01df9cbd2c4c5925a13a9f6af51ca06d7e70b6" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.477598 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.604829 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q"] Jan 23 10:03:03 crc kubenswrapper[4684]: E0123 10:03:03.605323 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb533e15-1dac-453b-a0d7-041112a91f0b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.605343 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb533e15-1dac-453b-a0d7-041112a91f0b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.605613 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb533e15-1dac-453b-a0d7-041112a91f0b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.606394 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.617987 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q"] Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.620376 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.621009 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.621331 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.621545 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.624578 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.624773 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.626852 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.626935 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.626965 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.626989 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.627085 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.627142 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s8s\" (UniqueName: \"kubernetes.io/projected/5310afc8-7024-4b88-b421-28631272375a-kube-api-access-z5s8s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.728304 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.728377 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.728400 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.728420 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.728478 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.728522 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5s8s\" (UniqueName: \"kubernetes.io/projected/5310afc8-7024-4b88-b421-28631272375a-kube-api-access-z5s8s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.733555 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.734133 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.734491 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.735344 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.739867 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.748397 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5s8s\" (UniqueName: \"kubernetes.io/projected/5310afc8-7024-4b88-b421-28631272375a-kube-api-access-z5s8s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:03 crc kubenswrapper[4684]: I0123 10:03:03.923329 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" Jan 23 10:03:04 crc kubenswrapper[4684]: I0123 10:03:04.454175 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q"] Jan 23 10:03:04 crc kubenswrapper[4684]: I0123 10:03:04.486370 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" event={"ID":"5310afc8-7024-4b88-b421-28631272375a","Type":"ContainerStarted","Data":"9ce1964b81fd406d410bb78e369a4d42d5b40f62ff805e5e9e73356a06d24858"} Jan 23 10:03:08 crc kubenswrapper[4684]: I0123 10:03:08.473248 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 10:03:09 crc kubenswrapper[4684]: I0123 10:03:09.139461 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" event={"ID":"5310afc8-7024-4b88-b421-28631272375a","Type":"ContainerStarted","Data":"0668a1519bfb9cdf8573b2b6403df757f82c08d2e9aa62503e02966fec51e03b"} Jan 23 10:03:09 crc kubenswrapper[4684]: I0123 10:03:09.198962 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" podStartSLOduration=2.19957751 podStartE2EDuration="6.198940283s" podCreationTimestamp="2026-01-23 10:03:03 +0000 UTC" firstStartedPulling="2026-01-23 10:03:04.471611627 +0000 UTC m=+3357.094990178" lastFinishedPulling="2026-01-23 10:03:08.47097441 +0000 UTC m=+3361.094352951" observedRunningTime="2026-01-23 10:03:09.188462442 +0000 UTC m=+3361.811840993" watchObservedRunningTime="2026-01-23 10:03:09.198940283 +0000 UTC m=+3361.822318824" Jan 23 10:03:15 crc kubenswrapper[4684]: I0123 10:03:15.582540 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:03:15 crc kubenswrapper[4684]: E0123 10:03:15.583407 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:03:27 crc kubenswrapper[4684]: I0123 10:03:27.587815 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:03:27 crc kubenswrapper[4684]: E0123 10:03:27.588523 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:03:40 crc kubenswrapper[4684]: I0123 10:03:40.583683 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:03:40 crc kubenswrapper[4684]: E0123 10:03:40.584595 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:03:53 crc kubenswrapper[4684]: I0123 10:03:53.582645 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:03:53 crc kubenswrapper[4684]: E0123 10:03:53.583968 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:04:06 crc kubenswrapper[4684]: I0123 10:04:06.582742 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:04:06 crc kubenswrapper[4684]: E0123 10:04:06.583803 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:04:20 crc kubenswrapper[4684]: I0123 10:04:20.582224 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:04:20 crc kubenswrapper[4684]: E0123 10:04:20.584008 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:04:34 crc kubenswrapper[4684]: I0123 10:04:34.581659 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:04:34 crc kubenswrapper[4684]: E0123 10:04:34.582484 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:04:49 crc kubenswrapper[4684]: I0123 10:04:49.661404 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:04:50 crc kubenswrapper[4684]: I0123 10:04:50.359432 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560"} Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.337779 4684 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-67pb6"] Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.340437 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.360919 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-67pb6"] Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.406160 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-catalog-content\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.406282 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-utilities\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.406377 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vql\" (UniqueName: \"kubernetes.io/projected/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-kube-api-access-j9vql\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.508054 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-catalog-content\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.508173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-utilities\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.508269 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vql\" (UniqueName: \"kubernetes.io/projected/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-kube-api-access-j9vql\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.508946 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-utilities\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.509050 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-catalog-content\") pod \"redhat-marketplace-67pb6\" (UID: 
\"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.531741 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vql\" (UniqueName: \"kubernetes.io/projected/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-kube-api-access-j9vql\") pod \"redhat-marketplace-67pb6\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:34 crc kubenswrapper[4684]: I0123 10:05:34.673048 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:35 crc kubenswrapper[4684]: I0123 10:05:35.266413 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-67pb6"] Jan 23 10:05:35 crc kubenswrapper[4684]: I0123 10:05:35.737778 4684 generic.go:334] "Generic (PLEG): container finished" podID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerID="e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87" exitCode=0 Jan 23 10:05:35 crc kubenswrapper[4684]: I0123 10:05:35.737945 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerDied","Data":"e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87"} Jan 23 10:05:35 crc kubenswrapper[4684]: I0123 10:05:35.738118 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerStarted","Data":"3a7a9d82e4b0f749821935fa17bc9a44b044b4917e322dca9fd389eaba120240"} Jan 23 10:05:35 crc kubenswrapper[4684]: I0123 10:05:35.744459 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:05:36 crc kubenswrapper[4684]: I0123 10:05:36.750933 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerStarted","Data":"c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9"} Jan 23 10:05:37 crc kubenswrapper[4684]: I0123 10:05:37.765064 4684 generic.go:334] "Generic (PLEG): container finished" podID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerID="c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9" exitCode=0 Jan 23 10:05:37 crc kubenswrapper[4684]: I0123 10:05:37.765116 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerDied","Data":"c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9"} Jan 23 10:05:38 crc kubenswrapper[4684]: I0123 10:05:38.786346 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerStarted","Data":"4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d"} Jan 23 10:05:38 crc kubenswrapper[4684]: I0123 10:05:38.848148 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-67pb6" podStartSLOduration=2.197933153 podStartE2EDuration="4.848127398s" podCreationTimestamp="2026-01-23 10:05:34 +0000 UTC" firstStartedPulling="2026-01-23 10:05:35.744178505 +0000 UTC m=+3508.367557046" 
lastFinishedPulling="2026-01-23 10:05:38.39437275 +0000 UTC m=+3511.017751291" observedRunningTime="2026-01-23 10:05:38.846581804 +0000 UTC m=+3511.469960355" watchObservedRunningTime="2026-01-23 10:05:38.848127398 +0000 UTC m=+3511.471505939" Jan 23 10:05:44 crc kubenswrapper[4684]: I0123 10:05:44.674335 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:44 crc kubenswrapper[4684]: I0123 10:05:44.675037 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:44 crc kubenswrapper[4684]: I0123 10:05:44.718043 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:44 crc kubenswrapper[4684]: I0123 10:05:44.871455 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:44 crc kubenswrapper[4684]: I0123 10:05:44.952943 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-67pb6"] Jan 23 10:05:46 crc kubenswrapper[4684]: I0123 10:05:46.844081 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-67pb6" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="registry-server" containerID="cri-o://4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d" gracePeriod=2 Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.319313 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.439930 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-catalog-content\") pod \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.440162 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-utilities\") pod \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.440229 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vql\" (UniqueName: \"kubernetes.io/projected/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-kube-api-access-j9vql\") pod \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\" (UID: \"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e\") " Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.441162 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-utilities" (OuterVolumeSpecName: "utilities") pod "3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" (UID: "3cb5e827-22d1-49c5-8ca4-c9e58e39de1e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.448905 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-kube-api-access-j9vql" (OuterVolumeSpecName: "kube-api-access-j9vql") pod "3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" (UID: "3cb5e827-22d1-49c5-8ca4-c9e58e39de1e"). InnerVolumeSpecName "kube-api-access-j9vql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.467667 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" (UID: "3cb5e827-22d1-49c5-8ca4-c9e58e39de1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.542569 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9vql\" (UniqueName: \"kubernetes.io/projected/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-kube-api-access-j9vql\") on node \"crc\" DevicePath \"\"" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.542622 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.542635 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.853990 4684 generic.go:334] "Generic (PLEG): container finished" podID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerID="4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d" exitCode=0 Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.854035 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerDied","Data":"4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d"} Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.854084 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67pb6" event={"ID":"3cb5e827-22d1-49c5-8ca4-c9e58e39de1e","Type":"ContainerDied","Data":"3a7a9d82e4b0f749821935fa17bc9a44b044b4917e322dca9fd389eaba120240"} Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.854053 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67pb6" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.854106 4684 scope.go:117] "RemoveContainer" containerID="4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.877452 4684 scope.go:117] "RemoveContainer" containerID="c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.879270 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-67pb6"] Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.893301 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-67pb6"] Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.903125 4684 scope.go:117] "RemoveContainer" containerID="e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.967835 4684 scope.go:117] "RemoveContainer" containerID="4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d" Jan 23 10:05:47 crc kubenswrapper[4684]: E0123 10:05:47.969892 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d\": container with ID starting with 4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d not found: ID does not exist" containerID="4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.969941 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d"} err="failed to get container status \"4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d\": rpc error: code = NotFound desc = could not find container \"4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d\": container with ID starting with 4b3e2b60ee00d948af80f4ae690f4319dacd39663442726b4a4cdc7689128a9d not found: ID does not exist" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.969971 4684 scope.go:117] "RemoveContainer" containerID="c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9" Jan 23 10:05:47 crc kubenswrapper[4684]: E0123 10:05:47.970657 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9\": container with ID starting with c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9 not found: ID does not exist" containerID="c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.970688 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9"} err="failed to get container status \"c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9\": rpc error: code = NotFound desc = could not find container \"c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9\": container with ID starting with c01011069ea460960cb65487d331107b148b6648226630ad2027650bc92489f9 not found: ID does not exist" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.970744 4684 scope.go:117] "RemoveContainer" 
containerID="e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87" Jan 23 10:05:47 crc kubenswrapper[4684]: E0123 10:05:47.971029 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87\": container with ID starting with e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87 not found: ID does not exist" containerID="e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87" Jan 23 10:05:47 crc kubenswrapper[4684]: I0123 10:05:47.971062 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87"} err="failed to get container status \"e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87\": rpc error: code = NotFound desc = could not find container \"e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87\": container with ID starting with e45a3a2137ba97e84844bb9d246fca2436b88d52d755e2d0a8e3684401dfbb87 not found: ID does not exist" Jan 23 10:05:49 crc kubenswrapper[4684]: I0123 10:05:49.594745 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" path="/var/lib/kubelet/pods/3cb5e827-22d1-49c5-8ca4-c9e58e39de1e/volumes" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.172368 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dxhls"] Jan 23 10:06:29 crc kubenswrapper[4684]: E0123 10:06:29.174689 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="registry-server" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.174841 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="registry-server" Jan 23 10:06:29 crc kubenswrapper[4684]: E0123 10:06:29.174946 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="extract-utilities" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.175008 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="extract-utilities" Jan 23 10:06:29 crc kubenswrapper[4684]: E0123 10:06:29.175082 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="extract-content" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.175143 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="extract-content" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.175464 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb5e827-22d1-49c5-8ca4-c9e58e39de1e" containerName="registry-server" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.177014 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.189625 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxhls"] Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.341011 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-catalog-content\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.341220 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-utilities\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.342352 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gbs\" (UniqueName: \"kubernetes.io/projected/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-kube-api-access-l2gbs\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.444277 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-catalog-content\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.444339 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-utilities\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.444406 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gbs\" (UniqueName: \"kubernetes.io/projected/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-kube-api-access-l2gbs\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.444755 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-catalog-content\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.444839 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-utilities\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.468275 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l2gbs\" (UniqueName: \"kubernetes.io/projected/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-kube-api-access-l2gbs\") pod \"certified-operators-dxhls\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:29 crc kubenswrapper[4684]: I0123 10:06:29.501765 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:30 crc kubenswrapper[4684]: I0123 10:06:30.116222 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxhls"] Jan 23 10:06:30 crc kubenswrapper[4684]: I0123 10:06:30.186273 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerStarted","Data":"910f885b6d0b9cfdfb3b56e05fcac9b487a69c09bbd9dbdee309214e3825de9d"} Jan 23 10:06:31 crc kubenswrapper[4684]: I0123 10:06:31.196032 4684 generic.go:334] "Generic (PLEG): container finished" podID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerID="e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21" exitCode=0 Jan 23 10:06:31 crc kubenswrapper[4684]: I0123 10:06:31.196141 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerDied","Data":"e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21"} Jan 23 10:06:32 crc kubenswrapper[4684]: I0123 10:06:32.206213 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerStarted","Data":"31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0"} Jan 23 10:06:34 crc kubenswrapper[4684]: I0123 10:06:34.239403 4684 generic.go:334] "Generic (PLEG): container finished" podID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerID="31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0" exitCode=0 Jan 23 10:06:34 crc kubenswrapper[4684]: I0123 10:06:34.239765 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerDied","Data":"31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0"} Jan 23 10:06:35 crc kubenswrapper[4684]: I0123 10:06:35.250867 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerStarted","Data":"a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434"} Jan 23 10:06:35 crc kubenswrapper[4684]: I0123 10:06:35.269049 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dxhls" podStartSLOduration=2.753584396 podStartE2EDuration="6.2690293s" podCreationTimestamp="2026-01-23 10:06:29 +0000 UTC" firstStartedPulling="2026-01-23 10:06:31.198220242 +0000 UTC m=+3563.821598783" lastFinishedPulling="2026-01-23 10:06:34.713665146 +0000 UTC m=+3567.337043687" observedRunningTime="2026-01-23 10:06:35.268093273 +0000 UTC m=+3567.891471814" watchObservedRunningTime="2026-01-23 10:06:35.2690293 +0000 UTC m=+3567.892407841" Jan 23 10:06:39 crc kubenswrapper[4684]: I0123 10:06:39.502039 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:39 crc kubenswrapper[4684]: I0123 10:06:39.502655 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:39 crc kubenswrapper[4684]: I0123 10:06:39.547771 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:40 crc kubenswrapper[4684]: I0123 10:06:40.335125 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:40 crc kubenswrapper[4684]: I0123 10:06:40.385842 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxhls"] Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.304544 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dxhls" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="registry-server" containerID="cri-o://a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434" gracePeriod=2 Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.760766 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxhls" Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.901774 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2gbs\" (UniqueName: \"kubernetes.io/projected/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-kube-api-access-l2gbs\") pod \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.901867 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-catalog-content\") pod \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.901963 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-utilities\") pod \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\" (UID: \"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e\") " Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.903258 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-utilities" (OuterVolumeSpecName: "utilities") pod "d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" (UID: "d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.907774 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-kube-api-access-l2gbs" (OuterVolumeSpecName: "kube-api-access-l2gbs") pod "d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" (UID: "d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e"). InnerVolumeSpecName "kube-api-access-l2gbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:06:42 crc kubenswrapper[4684]: I0123 10:06:42.966881 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" (UID: "d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.003777 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.003821 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2gbs\" (UniqueName: \"kubernetes.io/projected/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-kube-api-access-l2gbs\") on node \"crc\" DevicePath \"\"" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.003835 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.314883 4684 generic.go:334] "Generic (PLEG): container finished" podID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerID="a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434" exitCode=0 Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.315069 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerDied","Data":"a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434"} Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.315220 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxhls" event={"ID":"d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e","Type":"ContainerDied","Data":"910f885b6d0b9cfdfb3b56e05fcac9b487a69c09bbd9dbdee309214e3825de9d"} Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.315245 4684 scope.go:117] "RemoveContainer" containerID="a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.315165 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.333535 4684 scope.go:117] "RemoveContainer" containerID="31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0"
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.361874 4684 scope.go:117] "RemoveContainer" containerID="e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21"
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.366307 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxhls"]
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.373300 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dxhls"]
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.403398 4684 scope.go:117] "RemoveContainer" containerID="a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434"
Jan 23 10:06:43 crc kubenswrapper[4684]: E0123 10:06:43.403972 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434\": container with ID starting with a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434 not found: ID does not exist" containerID="a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434"
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.404023 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434"} err="failed to get container status \"a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434\": rpc error: code = NotFound desc = could not find container \"a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434\": container with ID starting with a7f53952d3940daaf90c0d9cd728ea5986a1a16fae30f18499e301e4bfd57434 not found: ID does not exist"
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.404054 4684 scope.go:117] "RemoveContainer" containerID="31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0"
Jan 23 10:06:43 crc kubenswrapper[4684]: E0123 10:06:43.404433 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0\": container with ID starting with 31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0 not found: ID does not exist" containerID="31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0"
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.404469 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0"} err="failed to get container status \"31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0\": rpc error: code = NotFound desc = could not find container \"31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0\": container with ID starting with 31cbeaabbf366811119d79d45e22946d93de9a4a10038545f1e8a0449a293fc0 not found: ID does not exist"
Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.404487 4684 scope.go:117] "RemoveContainer" containerID="e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21"
Jan 23 10:06:43 crc kubenswrapper[4684]: E0123 10:06:43.404782 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21\": container with ID starting with e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21 not found: ID does not exist" containerID="e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21"
failed" err="rpc error: code = NotFound desc = could not find container \"e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21\": container with ID starting with e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21 not found: ID does not exist" containerID="e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.404812 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21"} err="failed to get container status \"e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21\": rpc error: code = NotFound desc = could not find container \"e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21\": container with ID starting with e223bb57eae3356087d68ccd3fdba8fbccb379fd3e416f0bfaabb216d0ce9e21 not found: ID does not exist" Jan 23 10:06:43 crc kubenswrapper[4684]: I0123 10:06:43.594068 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" path="/var/lib/kubelet/pods/d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e/volumes" Jan 23 10:07:13 crc kubenswrapper[4684]: I0123 10:07:13.729177 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:07:13 crc kubenswrapper[4684]: I0123 10:07:13.729801 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:07:43 crc kubenswrapper[4684]: I0123 10:07:43.729021 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:07:43 crc kubenswrapper[4684]: I0123 10:07:43.729586 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:07:52 crc kubenswrapper[4684]: I0123 10:07:52.910995 4684 generic.go:334] "Generic (PLEG): container finished" podID="5310afc8-7024-4b88-b421-28631272375a" containerID="0668a1519bfb9cdf8573b2b6403df757f82c08d2e9aa62503e02966fec51e03b" exitCode=0 Jan 23 10:07:52 crc kubenswrapper[4684]: I0123 10:07:52.911072 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" event={"ID":"5310afc8-7024-4b88-b421-28631272375a","Type":"ContainerDied","Data":"0668a1519bfb9cdf8573b2b6403df757f82c08d2e9aa62503e02966fec51e03b"} Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.362915 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.438541 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5s8s\" (UniqueName: \"kubernetes.io/projected/5310afc8-7024-4b88-b421-28631272375a-kube-api-access-z5s8s\") pod \"5310afc8-7024-4b88-b421-28631272375a\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") "
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.438597 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-combined-ca-bundle\") pod \"5310afc8-7024-4b88-b421-28631272375a\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") "
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.438687 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ceph\") pod \"5310afc8-7024-4b88-b421-28631272375a\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") "
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.438723 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ssh-key-openstack-edpm-ipam\") pod \"5310afc8-7024-4b88-b421-28631272375a\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") "
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.438794 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-inventory\") pod \"5310afc8-7024-4b88-b421-28631272375a\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") "
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.438817 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-secret-0\") pod \"5310afc8-7024-4b88-b421-28631272375a\" (UID: \"5310afc8-7024-4b88-b421-28631272375a\") "
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.453846 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ceph" (OuterVolumeSpecName: "ceph") pod "5310afc8-7024-4b88-b421-28631272375a" (UID: "5310afc8-7024-4b88-b421-28631272375a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.454002 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5310afc8-7024-4b88-b421-28631272375a-kube-api-access-z5s8s" (OuterVolumeSpecName: "kube-api-access-z5s8s") pod "5310afc8-7024-4b88-b421-28631272375a" (UID: "5310afc8-7024-4b88-b421-28631272375a"). InnerVolumeSpecName "kube-api-access-z5s8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.456903 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5310afc8-7024-4b88-b421-28631272375a" (UID: "5310afc8-7024-4b88-b421-28631272375a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.461923 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5310afc8-7024-4b88-b421-28631272375a" (UID: "5310afc8-7024-4b88-b421-28631272375a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.463936 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-inventory" (OuterVolumeSpecName: "inventory") pod "5310afc8-7024-4b88-b421-28631272375a" (UID: "5310afc8-7024-4b88-b421-28631272375a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.475614 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "5310afc8-7024-4b88-b421-28631272375a" (UID: "5310afc8-7024-4b88-b421-28631272375a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.540721 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.540770 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.540787 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.540802 4684 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.540814 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5s8s\" (UniqueName: \"kubernetes.io/projected/5310afc8-7024-4b88-b421-28631272375a-kube-api-access-z5s8s\") on node \"crc\" DevicePath \"\"" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.540825 4684 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5310afc8-7024-4b88-b421-28631272375a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.934854 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q" event={"ID":"5310afc8-7024-4b88-b421-28631272375a","Type":"ContainerDied","Data":"9ce1964b81fd406d410bb78e369a4d42d5b40f62ff805e5e9e73356a06d24858"} Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.934908 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 10:07:54 crc kubenswrapper[4684]: I0123 10:07:54.934925 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ce1964b81fd406d410bb78e369a4d42d5b40f62ff805e5e9e73356a06d24858"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.036566 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"]
Jan 23 10:07:55 crc kubenswrapper[4684]: E0123 10:07:55.037009 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="extract-utilities"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037033 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="extract-utilities"
Jan 23 10:07:55 crc kubenswrapper[4684]: E0123 10:07:55.037050 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="registry-server"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037060 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="registry-server"
Jan 23 10:07:55 crc kubenswrapper[4684]: E0123 10:07:55.037077 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5310afc8-7024-4b88-b421-28631272375a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037086 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5310afc8-7024-4b88-b421-28631272375a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:07:55 crc kubenswrapper[4684]: E0123 10:07:55.037107 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="extract-content"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037114 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="extract-content"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037273 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9eaf1a5-cbd7-4420-9cb7-2e28f697e27e" containerName="registry-server"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037292 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5310afc8-7024-4b88-b421-28631272375a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.037923 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.040573 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.040882 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.041710 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vtkf"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.041788 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.041956 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.042066 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.042652 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.042822 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.043026 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.058792 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"]
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.151528 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.151618 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.151935 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152021 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152055 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152096 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k45\" (UniqueName: \"kubernetes.io/projected/55887726-e3b8-4e73-a5fe-c82860636e1b-kube-api-access-d2k45\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152164 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152216 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152282 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152431 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.152493 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254477 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254602 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254664 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254728 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254756 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254779 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2k45\" (UniqueName: \"kubernetes.io/projected/55887726-e3b8-4e73-a5fe-c82860636e1b-kube-api-access-d2k45\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254805 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254842 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ssh-key-openstack-edpm-ipam\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254891 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254929 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.254947 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.256090 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.256099 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.259179 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.259644 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.259719 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.261282 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.261561 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.262119 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.267624 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.270103 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.273804 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2k45\" (UniqueName: \"kubernetes.io/projected/55887726-e3b8-4e73-a5fe-c82860636e1b-kube-api-access-d2k45\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.374517 4684 util.go:30] "No sandbox for pod can be found. 
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.887491 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk"]
Jan 23 10:07:55 crc kubenswrapper[4684]: I0123 10:07:55.942863 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" event={"ID":"55887726-e3b8-4e73-a5fe-c82860636e1b","Type":"ContainerStarted","Data":"8a7c24b2b0c43e4bd2653e32cd1f2c51ca13d0297767084bd85ef6586873d60b"}
Jan 23 10:07:56 crc kubenswrapper[4684]: I0123 10:07:56.961785 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" event={"ID":"55887726-e3b8-4e73-a5fe-c82860636e1b","Type":"ContainerStarted","Data":"fb72704b79e2cb1c4ba925c4164ae162f787b4b5395a660d7a2ecbc56f277258"}
Jan 23 10:07:56 crc kubenswrapper[4684]: I0123 10:07:56.986463 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" podStartSLOduration=1.39157087 podStartE2EDuration="1.986442961s" podCreationTimestamp="2026-01-23 10:07:55 +0000 UTC" firstStartedPulling="2026-01-23 10:07:55.877267675 +0000 UTC m=+3648.500646206" lastFinishedPulling="2026-01-23 10:07:56.472139756 +0000 UTC m=+3649.095518297" observedRunningTime="2026-01-23 10:07:56.982278102 +0000 UTC m=+3649.605656643" watchObservedRunningTime="2026-01-23 10:07:56.986442961 +0000 UTC m=+3649.609821502"
Jan 23 10:08:13 crc kubenswrapper[4684]: I0123 10:08:13.728863 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 10:08:13 crc kubenswrapper[4684]: I0123 10:08:13.729394 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 10:08:13 crc kubenswrapper[4684]: I0123 10:08:13.729442 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf"
Jan 23 10:08:13 crc kubenswrapper[4684]: I0123 10:08:13.730213 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 10:08:13 crc kubenswrapper[4684]: I0123 10:08:13.730273 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560" gracePeriod=600
Jan 23 10:08:14 crc kubenswrapper[4684]: I0123 10:08:14.088214 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560" exitCode=0
podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560" exitCode=0 Jan 23 10:08:14 crc kubenswrapper[4684]: I0123 10:08:14.088267 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560"} Jan 23 10:08:14 crc kubenswrapper[4684]: I0123 10:08:14.088315 4684 scope.go:117] "RemoveContainer" containerID="d1c64bcff5b15812f02c5451d69c8159a40aa5751c27f7f31fd2c1167f6c8ab3" Jan 23 10:08:15 crc kubenswrapper[4684]: I0123 10:08:15.106397 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"} Jan 23 10:10:43 crc kubenswrapper[4684]: I0123 10:10:43.728946 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:10:43 crc kubenswrapper[4684]: I0123 10:10:43.729591 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:10:57 crc kubenswrapper[4684]: I0123 10:10:57.534389 4684 generic.go:334] "Generic (PLEG): container finished" podID="55887726-e3b8-4e73-a5fe-c82860636e1b" containerID="fb72704b79e2cb1c4ba925c4164ae162f787b4b5395a660d7a2ecbc56f277258" exitCode=0 Jan 23 10:10:57 crc kubenswrapper[4684]: I0123 10:10:57.534479 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" event={"ID":"55887726-e3b8-4e73-a5fe-c82860636e1b","Type":"ContainerDied","Data":"fb72704b79e2cb1c4ba925c4164ae162f787b4b5395a660d7a2ecbc56f277258"} Jan 23 10:10:58 crc kubenswrapper[4684]: I0123 10:10:58.996374 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.144723 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-inventory\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.144809 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2k45\" (UniqueName: \"kubernetes.io/projected/55887726-e3b8-4e73-a5fe-c82860636e1b-kube-api-access-d2k45\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.144839 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-0\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.144874 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-custom-ceph-combined-ca-bundle\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.144932 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-0\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.144988 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ssh-key-openstack-edpm-ipam\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.145013 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.145080 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-1\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.145166 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-1\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.145196 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") "
\"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.145227 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-extra-config-0\") pod \"55887726-e3b8-4e73-a5fe-c82860636e1b\" (UID: \"55887726-e3b8-4e73-a5fe-c82860636e1b\") " Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.150885 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.169389 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55887726-e3b8-4e73-a5fe-c82860636e1b-kube-api-access-d2k45" (OuterVolumeSpecName: "kube-api-access-d2k45") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "kube-api-access-d2k45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.171935 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph" (OuterVolumeSpecName: "ceph") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.177379 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.180946 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.182068 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.188524 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.189427 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.191000 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-inventory" (OuterVolumeSpecName: "inventory") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.197647 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.199411 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "55887726-e3b8-4e73-a5fe-c82860636e1b" (UID: "55887726-e3b8-4e73-a5fe-c82860636e1b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247038 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2k45\" (UniqueName: \"kubernetes.io/projected/55887726-e3b8-4e73-a5fe-c82860636e1b-kube-api-access-d2k45\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247292 4684 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247464 4684 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247557 4684 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247648 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247782 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247875 4684 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247942 4684 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.247996 4684 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.248050 4684 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/55887726-e3b8-4e73-a5fe-c82860636e1b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.248112 4684 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55887726-e3b8-4e73-a5fe-c82860636e1b-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.550262 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" event={"ID":"55887726-e3b8-4e73-a5fe-c82860636e1b","Type":"ContainerDied","Data":"8a7c24b2b0c43e4bd2653e32cd1f2c51ca13d0297767084bd85ef6586873d60b"} Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.550306 4684 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a7c24b2b0c43e4bd2653e32cd1f2c51ca13d0297767084bd85ef6586873d60b" Jan 23 10:10:59 crc kubenswrapper[4684]: I0123 10:10:59.550336 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk" Jan 23 10:11:13 crc kubenswrapper[4684]: I0123 10:11:13.728908 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:11:13 crc kubenswrapper[4684]: I0123 10:11:13.729466 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.207649 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 23 10:11:14 crc kubenswrapper[4684]: E0123 10:11:14.208054 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55887726-e3b8-4e73-a5fe-c82860636e1b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.208072 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="55887726-e3b8-4e73-a5fe-c82860636e1b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.208261 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="55887726-e3b8-4e73-a5fe-c82860636e1b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.209147 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.224455 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.225145 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.225992 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.228242 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.228451 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.318811 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.330258 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.345872 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-scripts\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.345925 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2d39cffc-9089-47c7-acd7-50bb64ed8f61-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.345950 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xk6\" (UniqueName: \"kubernetes.io/projected/46859102-633b-4fca-bbeb-c34dfdbea96d-kube-api-access-f2xk6\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.345987 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-run\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346015 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346038 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-sys\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346053 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346074 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346181 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-nvme\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346230 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-config-data-custom\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346258 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346287 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/46859102-633b-4fca-bbeb-c34dfdbea96d-ceph\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346310 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346420 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-sys\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346434 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-lib-modules\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346466 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346481 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-dev\") pod \"cinder-backup-0\" (UID: 
\"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346495 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346568 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346658 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346739 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-dev\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346773 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346810 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346858 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346908 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfh9\" (UniqueName: \"kubernetes.io/projected/2d39cffc-9089-47c7-acd7-50bb64ed8f61-kube-api-access-fwfh9\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346930 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-config-data\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " 
pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346973 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.346991 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-run\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.347019 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.347112 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.347178 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.347204 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449354 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449402 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449433 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-scripts\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449457 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2d39cffc-9089-47c7-acd7-50bb64ed8f61-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449478 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2xk6\" (UniqueName: \"kubernetes.io/projected/46859102-633b-4fca-bbeb-c34dfdbea96d-kube-api-access-f2xk6\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449502 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-run\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449526 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449533 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449581 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-sys\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449616 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-run\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449547 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-sys\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.449998 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450041 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450112 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-nvme\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450146 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-config-data-custom\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450172 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450206 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/46859102-633b-4fca-bbeb-c34dfdbea96d-ceph\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450227 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450310 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-sys\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450325 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-lib-modules\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450357 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-dev\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450374 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450391 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 
10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450427 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450462 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450482 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-dev\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450513 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450569 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450601 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450625 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwfh9\" (UniqueName: \"kubernetes.io/projected/2d39cffc-9089-47c7-acd7-50bb64ed8f61-kube-api-access-fwfh9\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450640 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-config-data\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450686 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450719 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-run\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450753 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.450787 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.451592 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-dev\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.451883 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-nvme\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.451943 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452241 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452336 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452480 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452607 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-sys\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452626 4684 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452862 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-lib-modules\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.452983 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-dev\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.453109 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.453234 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.453338 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.453452 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.454197 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.454253 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46859102-633b-4fca-bbeb-c34dfdbea96d-run\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.454944 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d39cffc-9089-47c7-acd7-50bb64ed8f61-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.455989 4684 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/46859102-633b-4fca-bbeb-c34dfdbea96d-ceph\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.456364 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-scripts\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.457636 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.458231 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.459117 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2d39cffc-9089-47c7-acd7-50bb64ed8f61-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.461057 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.461197 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-config-data-custom\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.461471 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.469712 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46859102-633b-4fca-bbeb-c34dfdbea96d-config-data\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.471828 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2xk6\" (UniqueName: \"kubernetes.io/projected/46859102-633b-4fca-bbeb-c34dfdbea96d-kube-api-access-f2xk6\") pod \"cinder-backup-0\" (UID: \"46859102-633b-4fca-bbeb-c34dfdbea96d\") " pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.473761 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fwfh9\" (UniqueName: \"kubernetes.io/projected/2d39cffc-9089-47c7-acd7-50bb64ed8f61-kube-api-access-fwfh9\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.477071 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d39cffc-9089-47c7-acd7-50bb64ed8f61-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"2d39cffc-9089-47c7-acd7-50bb64ed8f61\") " pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.525454 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.550097 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.947379 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-9r5vp"] Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.948956 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:14 crc kubenswrapper[4684]: I0123 10:11:14.968242 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-9r5vp"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.055009 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-e4ed-account-create-update-rzjjx"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.056454 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.061994 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.064150 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-operator-scripts\") pod \"manila-db-create-9r5vp\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.064341 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnkr\" (UniqueName: \"kubernetes.io/projected/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-kube-api-access-5vnkr\") pod \"manila-db-create-9r5vp\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.076464 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.083716 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.092789 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.094935 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.094935 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.095211 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4hbkx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.101749 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-e4ed-account-create-update-rzjjx"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.128609 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166459 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166533 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166613 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166646 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166685 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-ceph\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166728 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-config-data\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 
10:11:15.166766 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-logs\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166802 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnkr\" (UniqueName: \"kubernetes.io/projected/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-kube-api-access-5vnkr\") pod \"manila-db-create-9r5vp\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166827 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdgm\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-kube-api-access-gjdgm\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166862 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9zc\" (UniqueName: \"kubernetes.io/projected/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-kube-api-access-hr9zc\") pod \"manila-e4ed-account-create-update-rzjjx\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166892 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-operator-scripts\") pod \"manila-e4ed-account-create-update-rzjjx\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166932 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-operator-scripts\") pod \"manila-db-create-9r5vp\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.166973 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-scripts\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.168273 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-operator-scripts\") pod \"manila-db-create-9r5vp\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.197499 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.199846 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.204228 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.204671 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.212189 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.237983 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnkr\" (UniqueName: \"kubernetes.io/projected/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-kube-api-access-5vnkr\") pod \"manila-db-create-9r5vp\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.275510 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277058 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-config-data\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277114 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-logs\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277142 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277183 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277215 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-logs\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277245 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdgm\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-kube-api-access-gjdgm\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277275 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277297 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277323 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr9zc\" (UniqueName: \"kubernetes.io/projected/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-kube-api-access-hr9zc\") pod \"manila-e4ed-account-create-update-rzjjx\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277355 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-operator-scripts\") pod \"manila-e4ed-account-create-update-rzjjx\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277405 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-scripts\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277434 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vss56\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-kube-api-access-vss56\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277473 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277516 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277550 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-ceph\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 
10:11:15.277575 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277630 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277673 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277743 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.277775 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-ceph\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.285610 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-operator-scripts\") pod \"manila-e4ed-account-create-update-rzjjx\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.285623 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-logs\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.287416 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-ceph\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.289593 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.291926 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.293075 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.294831 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.326227 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-config-data\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.326457 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-scripts\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.326661 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.328855 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.330345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr9zc\" (UniqueName: \"kubernetes.io/projected/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-kube-api-access-hr9zc\") pod \"manila-e4ed-account-create-update-rzjjx\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.339295 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdgm\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-kube-api-access-gjdgm\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380330 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380415 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-logs\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380442 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380466 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380517 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380537 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380606 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vss56\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-kube-api-access-vss56\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380689 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-ceph\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.380735 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.389342 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.389625 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.393914 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.394552 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.395193 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.404114 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.435119 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.435244 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5688b9fcb7-jmp7t"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.439124 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5688b9fcb7-jmp7t"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.439812 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.444239 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.447132 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.447299 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.447485 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-8mqfh" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.447739 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.447887 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.448396 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-ceph\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.457923 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.501578 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vss56\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-kube-api-access-vss56\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.530642 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.545414 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.546474 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.578629 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59d6c7fdc9-qhdcc"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.580995 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.582369 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592367 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-scripts\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592413 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebba5660-17ca-4b84-9a66-a496add9d7cc-horizon-secret-key\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592473 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6526c5c-0da9-4294-a03a-6a8276b3d381-horizon-secret-key\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592510 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6526c5c-0da9-4294-a03a-6a8276b3d381-logs\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592574 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-config-data\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592610 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-scripts\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592630 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-config-data\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzk7l\" (UniqueName: 
\"kubernetes.io/projected/a6526c5c-0da9-4294-a03a-6a8276b3d381-kube-api-access-vzk7l\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592668 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebba5660-17ca-4b84-9a66-a496add9d7cc-logs\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.592689 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzfrr\" (UniqueName: \"kubernetes.io/projected/ebba5660-17ca-4b84-9a66-a496add9d7cc-kube-api-access-nzfrr\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.699972 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59d6c7fdc9-qhdcc"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723397 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-config-data\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723548 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-scripts\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723580 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-config-data\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723607 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzk7l\" (UniqueName: \"kubernetes.io/projected/a6526c5c-0da9-4294-a03a-6a8276b3d381-kube-api-access-vzk7l\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723638 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebba5660-17ca-4b84-9a66-a496add9d7cc-logs\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723658 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzfrr\" (UniqueName: \"kubernetes.io/projected/ebba5660-17ca-4b84-9a66-a496add9d7cc-kube-api-access-nzfrr\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723843 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-scripts\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723880 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebba5660-17ca-4b84-9a66-a496add9d7cc-horizon-secret-key\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.723995 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6526c5c-0da9-4294-a03a-6a8276b3d381-horizon-secret-key\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.724087 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6526c5c-0da9-4294-a03a-6a8276b3d381-logs\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.725296 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6526c5c-0da9-4294-a03a-6a8276b3d381-logs\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.728207 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebba5660-17ca-4b84-9a66-a496add9d7cc-logs\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.728789 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-scripts\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.729981 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-config-data\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.730210 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-scripts\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.730734 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-config-data\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " 
pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.730904 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"2d39cffc-9089-47c7-acd7-50bb64ed8f61","Type":"ContainerStarted","Data":"ff1ea975a030ea37639a3e46e0cb24948fccf1f898e59295b5213ae452927045"} Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.734641 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"46859102-633b-4fca-bbeb-c34dfdbea96d","Type":"ContainerStarted","Data":"181c4bf290baef248fcacda7e1a2c61dde5039650ff438b71d4bbae71dc1448d"} Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.793578 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebba5660-17ca-4b84-9a66-a496add9d7cc-horizon-secret-key\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.795356 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzfrr\" (UniqueName: \"kubernetes.io/projected/ebba5660-17ca-4b84-9a66-a496add9d7cc-kube-api-access-nzfrr\") pod \"horizon-59d6c7fdc9-qhdcc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.802949 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzk7l\" (UniqueName: \"kubernetes.io/projected/a6526c5c-0da9-4294-a03a-6a8276b3d381-kube-api-access-vzk7l\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.811597 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6526c5c-0da9-4294-a03a-6a8276b3d381-horizon-secret-key\") pod \"horizon-5688b9fcb7-jmp7t\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.818392 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.967010 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:15 crc kubenswrapper[4684]: I0123 10:11:15.995905 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.107896 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-9r5vp"] Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.469119 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-e4ed-account-create-update-rzjjx"] Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.522466 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.713011 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5688b9fcb7-jmp7t"] Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.760506 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:16 crc kubenswrapper[4684]: W0123 10:11:16.763771 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6526c5c_0da9_4294_a03a_6a8276b3d381.slice/crio-da3b59a3f7feaabade870353cef881c8e2351810c3c0fcd162a235925e2ae4a7 WatchSource:0}: Error finding container da3b59a3f7feaabade870353cef881c8e2351810c3c0fcd162a235925e2ae4a7: Status 404 returned error can't find the container with id da3b59a3f7feaabade870353cef881c8e2351810c3c0fcd162a235925e2ae4a7 Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.766930 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b697e4ae-16df-466e-bfad-f76ddb6f9e97","Type":"ContainerStarted","Data":"33f361584de79ca5c7580499610001c91dea56121bd9b66cacb6769ae35f39eb"} Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.778993 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-e4ed-account-create-update-rzjjx" event={"ID":"a7e275bc-7d07-4a5c-98be-6e9eb72cf537","Type":"ContainerStarted","Data":"48bc32194c16a2903c1b0ff47e930bfc90bc79d8709cb588a1623ce3739c7dd7"} Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.786946 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9r5vp" event={"ID":"9a1d764d-4ecd-4f2f-a4b8-848142c93b15","Type":"ContainerStarted","Data":"4b07826d8d7b74866e4df82c8644b5f00e22f376aa05bd0d8894087316060dc7"} Jan 23 10:11:16 crc kubenswrapper[4684]: I0123 10:11:16.889800 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59d6c7fdc9-qhdcc"] Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.874679 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a1353995-a0d3-4d2d-bb96-99c94673be54","Type":"ContainerStarted","Data":"920a575cecaa33d4216d639088c432b57f810407cb4f6bd9d1a24832f7245575"} Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.899452 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5688b9fcb7-jmp7t" event={"ID":"a6526c5c-0da9-4294-a03a-6a8276b3d381","Type":"ContainerStarted","Data":"da3b59a3f7feaabade870353cef881c8e2351810c3c0fcd162a235925e2ae4a7"} Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.907453 4684 generic.go:334] "Generic (PLEG): container finished" podID="9a1d764d-4ecd-4f2f-a4b8-848142c93b15" containerID="6b65babecde8db8f98f55ed29b02489a73c7ecaf2fe163886352ecff8af568c9" exitCode=0 Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.907591 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-db-create-9r5vp" event={"ID":"9a1d764d-4ecd-4f2f-a4b8-848142c93b15","Type":"ContainerDied","Data":"6b65babecde8db8f98f55ed29b02489a73c7ecaf2fe163886352ecff8af568c9"} Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.931653 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d6c7fdc9-qhdcc" event={"ID":"ebba5660-17ca-4b84-9a66-a496add9d7cc","Type":"ContainerStarted","Data":"abec2bf3570a222fae3ebf82191744dc27ae46ceb2b820e0e288e1d481f3c50d"} Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.941054 4684 generic.go:334] "Generic (PLEG): container finished" podID="a7e275bc-7d07-4a5c-98be-6e9eb72cf537" containerID="aa1cfff82632dd93f61919922195de1d8bd4c2eada5623abb4fcbef1821342cc" exitCode=0 Jan 23 10:11:17 crc kubenswrapper[4684]: I0123 10:11:17.941111 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-e4ed-account-create-update-rzjjx" event={"ID":"a7e275bc-7d07-4a5c-98be-6e9eb72cf537","Type":"ContainerDied","Data":"aa1cfff82632dd93f61919922195de1d8bd4c2eada5623abb4fcbef1821342cc"} Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.315090 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5688b9fcb7-jmp7t"] Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.352901 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6dc7f74bf4-rpjsz"] Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.383727 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.392876 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.521609 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dc7f74bf4-rpjsz"] Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.564146 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d510be09-5472-4350-8930-0cda7b4b9c84-logs\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.564243 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9v6\" (UniqueName: \"kubernetes.io/projected/d510be09-5472-4350-8930-0cda7b4b9c84-kube-api-access-xm9v6\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.564320 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-tls-certs\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.564352 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-combined-ca-bundle\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 
10:11:18.564414 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-secret-key\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.564448 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-config-data\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.564470 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-scripts\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.609739 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59d6c7fdc9-qhdcc"] Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.618780 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7df5b758fb-8sfdj"] Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.620786 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.669681 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-horizon-secret-key\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670114 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d43a15-1645-42a6-a25b-a6c4d7a244c4-config-data\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670181 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d43a15-1645-42a6-a25b-a6c4d7a244c4-scripts\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670299 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d43a15-1645-42a6-a25b-a6c4d7a244c4-logs\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670348 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d510be09-5472-4350-8930-0cda7b4b9c84-logs\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " 
pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670465 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm9v6\" (UniqueName: \"kubernetes.io/projected/d510be09-5472-4350-8930-0cda7b4b9c84-kube-api-access-xm9v6\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670529 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-combined-ca-bundle\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670570 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82nw\" (UniqueName: \"kubernetes.io/projected/78d43a15-1645-42a6-a25b-a6c4d7a244c4-kube-api-access-z82nw\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670598 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-tls-certs\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670623 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-combined-ca-bundle\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.670656 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-horizon-tls-certs\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.671659 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-secret-key\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.671740 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-config-data\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.671769 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-scripts\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " 
pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.673288 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d510be09-5472-4350-8930-0cda7b4b9c84-logs\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.678635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-tls-certs\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.680413 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-config-data\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.689259 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-combined-ca-bundle\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.689510 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df5b758fb-8sfdj"] Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.690001 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-scripts\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.699959 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-secret-key\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.717845 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9v6\" (UniqueName: \"kubernetes.io/projected/d510be09-5472-4350-8930-0cda7b4b9c84-kube-api-access-xm9v6\") pod \"horizon-6dc7f74bf4-rpjsz\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.751676 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.778445 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d43a15-1645-42a6-a25b-a6c4d7a244c4-logs\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.779207 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-combined-ca-bundle\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.779295 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z82nw\" (UniqueName: \"kubernetes.io/projected/78d43a15-1645-42a6-a25b-a6c4d7a244c4-kube-api-access-z82nw\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.779207 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d43a15-1645-42a6-a25b-a6c4d7a244c4-logs\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.779993 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-horizon-tls-certs\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.780609 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-horizon-secret-key\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.781088 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d43a15-1645-42a6-a25b-a6c4d7a244c4-config-data\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.781203 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d43a15-1645-42a6-a25b-a6c4d7a244c4-scripts\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.782019 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d43a15-1645-42a6-a25b-a6c4d7a244c4-scripts\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.786865 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-horizon-secret-key\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.792498 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-horizon-tls-certs\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.793155 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d43a15-1645-42a6-a25b-a6c4d7a244c4-config-data\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.796693 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d43a15-1645-42a6-a25b-a6c4d7a244c4-combined-ca-bundle\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:18 crc kubenswrapper[4684]: I0123 10:11:18.804573 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z82nw\" (UniqueName: \"kubernetes.io/projected/78d43a15-1645-42a6-a25b-a6c4d7a244c4-kube-api-access-z82nw\") pod \"horizon-7df5b758fb-8sfdj\" (UID: \"78d43a15-1645-42a6-a25b-a6c4d7a244c4\") " pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.010625 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"2d39cffc-9089-47c7-acd7-50bb64ed8f61","Type":"ContainerStarted","Data":"f3b1dbff5a3efdc78be2340bd4e5c9723f2afc5c2cfb9ae749f8117ac018c74f"} Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.016687 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a1353995-a0d3-4d2d-bb96-99c94673be54","Type":"ContainerStarted","Data":"46d83126df26e23292b75e36c16d9ca1be43ff977c7bd44bb9d44e340ef2c0bf"} Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.028655 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"46859102-633b-4fca-bbeb-c34dfdbea96d","Type":"ContainerStarted","Data":"d9d937d09303c0ec98cacfcdb0db84e9870732b7d19efc46e25aefa8267ad551"} Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.034192 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b697e4ae-16df-466e-bfad-f76ddb6f9e97","Type":"ContainerStarted","Data":"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468"} Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.051807 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.394505 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dc7f74bf4-rpjsz"] Jan 23 10:11:19 crc kubenswrapper[4684]: W0123 10:11:19.430865 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd510be09_5472_4350_8930_0cda7b4b9c84.slice/crio-24cfc0ebfa7a7e7c712273b2a7b0d41a3931d6744783f3ca890c777f5bd9f44d WatchSource:0}: Error finding container 24cfc0ebfa7a7e7c712273b2a7b0d41a3931d6744783f3ca890c777f5bd9f44d: Status 404 returned error can't find the container with id 24cfc0ebfa7a7e7c712273b2a7b0d41a3931d6744783f3ca890c777f5bd9f44d Jan 23 10:11:19 crc kubenswrapper[4684]: I0123 10:11:19.988565 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.016212 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vnkr\" (UniqueName: \"kubernetes.io/projected/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-kube-api-access-5vnkr\") pod \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.016249 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-operator-scripts\") pod \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\" (UID: \"9a1d764d-4ecd-4f2f-a4b8-848142c93b15\") " Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.017381 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a1d764d-4ecd-4f2f-a4b8-848142c93b15" (UID: "9a1d764d-4ecd-4f2f-a4b8-848142c93b15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.045964 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-kube-api-access-5vnkr" (OuterVolumeSpecName: "kube-api-access-5vnkr") pod "9a1d764d-4ecd-4f2f-a4b8-848142c93b15" (UID: "9a1d764d-4ecd-4f2f-a4b8-848142c93b15"). InnerVolumeSpecName "kube-api-access-5vnkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.079812 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b697e4ae-16df-466e-bfad-f76ddb6f9e97","Type":"ContainerStarted","Data":"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3"} Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.079956 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-log" containerID="cri-o://90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468" gracePeriod=30 Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.080267 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-httpd" containerID="cri-o://f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3" gracePeriod=30 Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.084634 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"2d39cffc-9089-47c7-acd7-50bb64ed8f61","Type":"ContainerStarted","Data":"737093b61d989f397423f73a3db32285853f09aef4d890592e7e427695f17522"} Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.105521 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.105503023 podStartE2EDuration="6.105503023s" podCreationTimestamp="2026-01-23 10:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:11:20.100181941 +0000 UTC m=+3852.723560482" watchObservedRunningTime="2026-01-23 10:11:20.105503023 +0000 UTC m=+3852.728881564" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.109358 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-log" containerID="cri-o://46d83126df26e23292b75e36c16d9ca1be43ff977c7bd44bb9d44e340ef2c0bf" gracePeriod=30 Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.109782 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-httpd" containerID="cri-o://4eb213dd37f72e230d5788d3ec77b918bb1aea152d652c1afe2ebec63de686cf" gracePeriod=30 Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.118412 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vnkr\" (UniqueName: \"kubernetes.io/projected/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-kube-api-access-5vnkr\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.123674 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d764d-4ecd-4f2f-a4b8-848142c93b15-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.139945 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"46859102-633b-4fca-bbeb-c34dfdbea96d","Type":"ContainerStarted","Data":"67a0ec24ab5db6078f7ccf99eda904a68f6638a657174d5c9d96fd506add05f5"} Jan 
23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.144102 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9r5vp" event={"ID":"9a1d764d-4ecd-4f2f-a4b8-848142c93b15","Type":"ContainerDied","Data":"4b07826d8d7b74866e4df82c8644b5f00e22f376aa05bd0d8894087316060dc7"} Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.144571 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b07826d8d7b74866e4df82c8644b5f00e22f376aa05bd0d8894087316060dc7" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.145938 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.825754401 podStartE2EDuration="6.145925865s" podCreationTimestamp="2026-01-23 10:11:14 +0000 UTC" firstStartedPulling="2026-01-23 10:11:15.498889006 +0000 UTC m=+3848.122267547" lastFinishedPulling="2026-01-23 10:11:17.81906047 +0000 UTC m=+3850.442439011" observedRunningTime="2026-01-23 10:11:20.130608278 +0000 UTC m=+3852.753986829" watchObservedRunningTime="2026-01-23 10:11:20.145925865 +0000 UTC m=+3852.769304406" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.146898 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9r5vp" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.192099 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc7f74bf4-rpjsz" event={"ID":"d510be09-5472-4350-8930-0cda7b4b9c84","Type":"ContainerStarted","Data":"24cfc0ebfa7a7e7c712273b2a7b0d41a3931d6744783f3ca890c777f5bd9f44d"} Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.192375 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.192355449 podStartE2EDuration="6.192355449s" podCreationTimestamp="2026-01-23 10:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:11:20.183014473 +0000 UTC m=+3852.806393014" watchObservedRunningTime="2026-01-23 10:11:20.192355449 +0000 UTC m=+3852.815733990" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.229277 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df5b758fb-8sfdj"] Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.238218 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.851795074 podStartE2EDuration="6.238193116s" podCreationTimestamp="2026-01-23 10:11:14 +0000 UTC" firstStartedPulling="2026-01-23 10:11:15.328509148 +0000 UTC m=+3847.951887699" lastFinishedPulling="2026-01-23 10:11:17.71490719 +0000 UTC m=+3850.338285741" observedRunningTime="2026-01-23 10:11:20.217789084 +0000 UTC m=+3852.841167635" watchObservedRunningTime="2026-01-23 10:11:20.238193116 +0000 UTC m=+3852.861571667" Jan 23 10:11:20 crc kubenswrapper[4684]: W0123 10:11:20.279141 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d43a15_1645_42a6_a25b_a6c4d7a244c4.slice/crio-f9375b06e8e43dbf3b035526cb6897d166f3989aa23a40e6abf2b002ceb85614 WatchSource:0}: Error finding container f9375b06e8e43dbf3b035526cb6897d166f3989aa23a40e6abf2b002ceb85614: Status 404 returned error can't find the container with id f9375b06e8e43dbf3b035526cb6897d166f3989aa23a40e6abf2b002ceb85614 Jan 23 10:11:20 crc 
kubenswrapper[4684]: I0123 10:11:20.281967 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.326958 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9zc\" (UniqueName: \"kubernetes.io/projected/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-kube-api-access-hr9zc\") pod \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.327029 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-operator-scripts\") pod \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\" (UID: \"a7e275bc-7d07-4a5c-98be-6e9eb72cf537\") " Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.328715 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7e275bc-7d07-4a5c-98be-6e9eb72cf537" (UID: "a7e275bc-7d07-4a5c-98be-6e9eb72cf537"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.334736 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-kube-api-access-hr9zc" (OuterVolumeSpecName: "kube-api-access-hr9zc") pod "a7e275bc-7d07-4a5c-98be-6e9eb72cf537" (UID: "a7e275bc-7d07-4a5c-98be-6e9eb72cf537"). InnerVolumeSpecName "kube-api-access-hr9zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.432341 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr9zc\" (UniqueName: \"kubernetes.io/projected/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-kube-api-access-hr9zc\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:20 crc kubenswrapper[4684]: I0123 10:11:20.432746 4684 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e275bc-7d07-4a5c-98be-6e9eb72cf537-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:20.968992 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048243 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-ceph\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048281 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-internal-tls-certs\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048323 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-config-data\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048353 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vss56\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-kube-api-access-vss56\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048413 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-httpd-run\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048514 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-scripts\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048626 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-logs\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048648 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-combined-ca-bundle\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.048675 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\" (UID: \"b697e4ae-16df-466e-bfad-f76ddb6f9e97\") " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.051454 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-logs" (OuterVolumeSpecName: "logs") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.051678 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.056877 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-kube-api-access-vss56" (OuterVolumeSpecName: "kube-api-access-vss56") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "kube-api-access-vss56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.084840 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.085218 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-ceph" (OuterVolumeSpecName: "ceph") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.087810 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-scripts" (OuterVolumeSpecName: "scripts") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.152296 4684 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.152323 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.152331 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b697e4ae-16df-466e-bfad-f76ddb6f9e97-logs\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.152358 4684 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.152370 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.152384 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vss56\" (UniqueName: \"kubernetes.io/projected/b697e4ae-16df-466e-bfad-f76ddb6f9e97-kube-api-access-vss56\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.160998 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.259579 4684 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.263632 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.263687 4684 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279559 4684 generic.go:334] "Generic (PLEG): container finished" podID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerID="f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3" exitCode=143 Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279595 4684 generic.go:334] "Generic (PLEG): container finished" podID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerID="90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468" exitCode=143 Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279670 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b697e4ae-16df-466e-bfad-f76ddb6f9e97","Type":"ContainerDied","Data":"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279736 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b697e4ae-16df-466e-bfad-f76ddb6f9e97","Type":"ContainerDied","Data":"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b697e4ae-16df-466e-bfad-f76ddb6f9e97","Type":"ContainerDied","Data":"33f361584de79ca5c7580499610001c91dea56121bd9b66cacb6769ae35f39eb"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279770 4684 scope.go:117] "RemoveContainer" containerID="f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.279934 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.292614 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-e4ed-account-create-update-rzjjx" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.293024 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-e4ed-account-create-update-rzjjx" event={"ID":"a7e275bc-7d07-4a5c-98be-6e9eb72cf537","Type":"ContainerDied","Data":"48bc32194c16a2903c1b0ff47e930bfc90bc79d8709cb588a1623ce3739c7dd7"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.293075 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48bc32194c16a2903c1b0ff47e930bfc90bc79d8709cb588a1623ce3739c7dd7" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.299454 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df5b758fb-8sfdj" event={"ID":"78d43a15-1645-42a6-a25b-a6c4d7a244c4","Type":"ContainerStarted","Data":"f9375b06e8e43dbf3b035526cb6897d166f3989aa23a40e6abf2b002ceb85614"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.304369 4684 generic.go:334] "Generic (PLEG): container finished" podID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerID="4eb213dd37f72e230d5788d3ec77b918bb1aea152d652c1afe2ebec63de686cf" exitCode=143 Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.304388 4684 generic.go:334] "Generic (PLEG): container finished" podID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerID="46d83126df26e23292b75e36c16d9ca1be43ff977c7bd44bb9d44e340ef2c0bf" exitCode=143 Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.305782 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a1353995-a0d3-4d2d-bb96-99c94673be54","Type":"ContainerDied","Data":"4eb213dd37f72e230d5788d3ec77b918bb1aea152d652c1afe2ebec63de686cf"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.305847 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a1353995-a0d3-4d2d-bb96-99c94673be54","Type":"ContainerDied","Data":"46d83126df26e23292b75e36c16d9ca1be43ff977c7bd44bb9d44e340ef2c0bf"} Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.339843 4684 scope.go:117] "RemoveContainer" containerID="90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.341523 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.366091 4684 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.376482 4684 scope.go:117] "RemoveContainer" containerID="f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3" Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.377738 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3\": container with ID starting with f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3 not found: ID does not exist" containerID="f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.377778 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3"} err="failed to get container status \"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3\": rpc error: code = NotFound desc = could not find container \"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3\": container with ID starting with f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3 not found: ID does not exist" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.377808 4684 scope.go:117] "RemoveContainer" containerID="90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468" Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.379641 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468\": container with ID starting with 90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468 not found: ID does not exist" containerID="90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.379681 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468"} err="failed to get container status \"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468\": rpc error: code = NotFound desc = could not find container \"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468\": container with ID starting with 90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468 not found: ID does not exist" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.379825 4684 scope.go:117] "RemoveContainer" containerID="f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.380052 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3"} err="failed to get container status \"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3\": rpc error: code = NotFound desc = could not find container \"f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3\": container with ID starting with f382e5fe5ccd99548f8d36eaa2941f441c1a3ea280bbb4cbde5a1c18d41167d3 not found: 
ID does not exist" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.380067 4684 scope.go:117] "RemoveContainer" containerID="90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.380307 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468"} err="failed to get container status \"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468\": rpc error: code = NotFound desc = could not find container \"90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468\": container with ID starting with 90463277892f21211bdf8a76f6162d3ae7590a658ee34f4a46b7a9e0af35f468 not found: ID does not exist" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.393556 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-config-data" (OuterVolumeSpecName: "config-data") pod "b697e4ae-16df-466e-bfad-f76ddb6f9e97" (UID: "b697e4ae-16df-466e-bfad-f76ddb6f9e97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.468127 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b697e4ae-16df-466e-bfad-f76ddb6f9e97-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.530538 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7e275bc_7d07_4a5c_98be_6e9eb72cf537.slice/crio-48bc32194c16a2903c1b0ff47e930bfc90bc79d8709cb588a1623ce3739c7dd7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7e275bc_7d07_4a5c_98be_6e9eb72cf537.slice\": RecentStats: unable to find data in memory cache]" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.625876 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.655232 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.663812 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.664314 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-log" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664330 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-log" Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.664355 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1d764d-4ecd-4f2f-a4b8-848142c93b15" containerName="mariadb-database-create" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664361 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1d764d-4ecd-4f2f-a4b8-848142c93b15" containerName="mariadb-database-create" Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.664399 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-httpd" Jan 23 
10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664406 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-httpd" Jan 23 10:11:21 crc kubenswrapper[4684]: E0123 10:11:21.664414 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e275bc-7d07-4a5c-98be-6e9eb72cf537" containerName="mariadb-account-create-update" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664420 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e275bc-7d07-4a5c-98be-6e9eb72cf537" containerName="mariadb-account-create-update" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664639 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-httpd" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664654 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e275bc-7d07-4a5c-98be-6e9eb72cf537" containerName="mariadb-account-create-update" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664671 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1d764d-4ecd-4f2f-a4b8-848142c93b15" containerName="mariadb-database-create" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.664681 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" containerName="glance-log" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.666192 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.674365 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.674449 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.695215 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782078 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0804b14-3b60-4dbc-8e29-9cb493b96de4-ceph\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782150 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782195 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782220 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b0804b14-3b60-4dbc-8e29-9cb493b96de4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782242 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhzg\" (UniqueName: \"kubernetes.io/projected/b0804b14-3b60-4dbc-8e29-9cb493b96de4-kube-api-access-zlhzg\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782281 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782297 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782319 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.782408 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0804b14-3b60-4dbc-8e29-9cb493b96de4-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.883859 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0804b14-3b60-4dbc-8e29-9cb493b96de4-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.883957 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0804b14-3b60-4dbc-8e29-9cb493b96de4-ceph\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884002 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884044 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884077 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0804b14-3b60-4dbc-8e29-9cb493b96de4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884101 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlhzg\" (UniqueName: \"kubernetes.io/projected/b0804b14-3b60-4dbc-8e29-9cb493b96de4-kube-api-access-zlhzg\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884142 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884160 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884181 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.884933 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0804b14-3b60-4dbc-8e29-9cb493b96de4-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.885625 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.888414 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0804b14-3b60-4dbc-8e29-9cb493b96de4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.893572 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.893988 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.897537 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.901684 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b0804b14-3b60-4dbc-8e29-9cb493b96de4-ceph\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.902128 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0804b14-3b60-4dbc-8e29-9cb493b96de4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.906122 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlhzg\" (UniqueName: \"kubernetes.io/projected/b0804b14-3b60-4dbc-8e29-9cb493b96de4-kube-api-access-zlhzg\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.924058 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0804b14-3b60-4dbc-8e29-9cb493b96de4\") " pod="openstack/glance-default-internal-api-0" Jan 23 10:11:21 crc kubenswrapper[4684]: I0123 10:11:21.996716 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.390075 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.511356 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-httpd-run\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.511614 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdgm\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-kube-api-access-gjdgm\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.511667 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.511758 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-ceph\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.511890 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-public-tls-certs\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.511948 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-logs\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.512077 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-config-data\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.512153 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-scripts\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.512178 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-combined-ca-bundle\") pod \"a1353995-a0d3-4d2d-bb96-99c94673be54\" (UID: \"a1353995-a0d3-4d2d-bb96-99c94673be54\") " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.514267 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.514935 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-logs" (OuterVolumeSpecName: "logs") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.535938 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-scripts" (OuterVolumeSpecName: "scripts") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.543202 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-kube-api-access-gjdgm" (OuterVolumeSpecName: "kube-api-access-gjdgm") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "kube-api-access-gjdgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.565991 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-ceph" (OuterVolumeSpecName: "ceph") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.587737 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.627770 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-logs\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.627803 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.627814 4684 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1353995-a0d3-4d2d-bb96-99c94673be54-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.627825 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdgm\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-kube-api-access-gjdgm\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.627849 4684 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.627857 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1353995-a0d3-4d2d-bb96-99c94673be54-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.650955 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.730308 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.758307 4684 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.786357 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.837060 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-config-data" (OuterVolumeSpecName: "config-data") pod "a1353995-a0d3-4d2d-bb96-99c94673be54" (UID: "a1353995-a0d3-4d2d-bb96-99c94673be54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.843665 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.843722 4684 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:22 crc kubenswrapper[4684]: I0123 10:11:22.843744 4684 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1353995-a0d3-4d2d-bb96-99c94673be54-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.148316 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 10:11:23 crc kubenswrapper[4684]: W0123 10:11:23.166030 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0804b14_3b60_4dbc_8e29_9cb493b96de4.slice/crio-3538a4b8509b18aa48f0fb5e02339a33c5fb3a6a7f793ae242f3dabdb23c1af2 WatchSource:0}: Error finding container 3538a4b8509b18aa48f0fb5e02339a33c5fb3a6a7f793ae242f3dabdb23c1af2: Status 404 returned error can't find the container with id 3538a4b8509b18aa48f0fb5e02339a33c5fb3a6a7f793ae242f3dabdb23c1af2 Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.354757 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0804b14-3b60-4dbc-8e29-9cb493b96de4","Type":"ContainerStarted","Data":"3538a4b8509b18aa48f0fb5e02339a33c5fb3a6a7f793ae242f3dabdb23c1af2"} Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.362475 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a1353995-a0d3-4d2d-bb96-99c94673be54","Type":"ContainerDied","Data":"920a575cecaa33d4216d639088c432b57f810407cb4f6bd9d1a24832f7245575"} Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.362526 4684 scope.go:117] "RemoveContainer" containerID="4eb213dd37f72e230d5788d3ec77b918bb1aea152d652c1afe2ebec63de686cf" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.362651 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.419957 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.441909 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.479784 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:23 crc kubenswrapper[4684]: E0123 10:11:23.488889 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-httpd" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.532835 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-httpd" Jan 23 10:11:23 crc kubenswrapper[4684]: E0123 10:11:23.532926 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-log" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.532936 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-log" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.533624 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-log" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.533639 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" containerName="glance-httpd" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.544027 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.544110 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.550194 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.552107 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568066 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-logs\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568227 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568255 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568317 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568382 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2qd\" (UniqueName: \"kubernetes.io/projected/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-kube-api-access-lx2qd\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568422 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-ceph\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568500 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568530 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-httpd-run\") pod \"glance-default-external-api-0\" 
(UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.568624 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.627918 4684 scope.go:117] "RemoveContainer" containerID="46d83126df26e23292b75e36c16d9ca1be43ff977c7bd44bb9d44e340ef2c0bf" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.635359 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1353995-a0d3-4d2d-bb96-99c94673be54" path="/var/lib/kubelet/pods/a1353995-a0d3-4d2d-bb96-99c94673be54/volumes" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.638569 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b697e4ae-16df-466e-bfad-f76ddb6f9e97" path="/var/lib/kubelet/pods/b697e4ae-16df-466e-bfad-f76ddb6f9e97/volumes" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670119 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670159 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670206 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670237 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2qd\" (UniqueName: \"kubernetes.io/projected/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-kube-api-access-lx2qd\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670257 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-ceph\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670283 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670299 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670352 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.670377 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-logs\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.672751 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.672869 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.673166 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-logs\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.688142 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.697731 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.699874 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.728980 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2qd\" (UniqueName: 
\"kubernetes.io/projected/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-kube-api-access-lx2qd\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.729328 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-ceph\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.729621 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7c366f0-4ad9-4ec9-91ff-bab599bae5d0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.739656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0\") " pod="openstack/glance-default-external-api-0" Jan 23 10:11:23 crc kubenswrapper[4684]: I0123 10:11:23.880177 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 10:11:24 crc kubenswrapper[4684]: I0123 10:11:24.530378 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 23 10:11:24 crc kubenswrapper[4684]: I0123 10:11:24.550944 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:24 crc kubenswrapper[4684]: I0123 10:11:24.630361 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 10:11:24 crc kubenswrapper[4684]: W0123 10:11:24.765997 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7c366f0_4ad9_4ec9_91ff_bab599bae5d0.slice/crio-6cada3a30356376e0cd8110e9b66cb4b63dc481009ad5e164bdf76e029e3df04 WatchSource:0}: Error finding container 6cada3a30356376e0cd8110e9b66cb4b63dc481009ad5e164bdf76e029e3df04: Status 404 returned error can't find the container with id 6cada3a30356376e0cd8110e9b66cb4b63dc481009ad5e164bdf76e029e3df04 Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.179543 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.201427 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="2d39cffc-9089-47c7-acd7-50bb64ed8f61" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.405930 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0","Type":"ContainerStarted","Data":"6cada3a30356376e0cd8110e9b66cb4b63dc481009ad5e164bdf76e029e3df04"} Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.410399 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b0804b14-3b60-4dbc-8e29-9cb493b96de4","Type":"ContainerStarted","Data":"5a911fb37d7756a59ed67166d95684a99971da028d514d5ad0e9da7d1c48477e"} Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.489687 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-mdmkd"] Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.492464 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.495854 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.495958 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-6j9z9" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.524377 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mdmkd"] Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.545316 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksvh\" (UniqueName: \"kubernetes.io/projected/bd829550-43d3-42d9-a9b4-e088ef820a77-kube-api-access-vksvh\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.546360 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-combined-ca-bundle\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.551762 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-config-data\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.552193 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-job-config-data\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.655774 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-config-data\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.655830 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-job-config-data\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.667430 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vksvh\" (UniqueName: 
\"kubernetes.io/projected/bd829550-43d3-42d9-a9b4-e088ef820a77-kube-api-access-vksvh\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.667645 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-combined-ca-bundle\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.669930 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-job-config-data\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.674507 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-combined-ca-bundle\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.708479 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vksvh\" (UniqueName: \"kubernetes.io/projected/bd829550-43d3-42d9-a9b4-e088ef820a77-kube-api-access-vksvh\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.725461 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-config-data\") pod \"manila-db-sync-mdmkd\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:25 crc kubenswrapper[4684]: I0123 10:11:25.836271 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-mdmkd" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.115048 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zklnt"] Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.123541 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.155252 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zklnt"] Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.182547 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-catalog-content\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.182784 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxnf\" (UniqueName: \"kubernetes.io/projected/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-kube-api-access-4pxnf\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.184080 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-utilities\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.288022 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-utilities\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.288113 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-catalog-content\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.288228 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pxnf\" (UniqueName: \"kubernetes.io/projected/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-kube-api-access-4pxnf\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.289151 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-catalog-content\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.289170 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-utilities\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.316187 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4pxnf\" (UniqueName: \"kubernetes.io/projected/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-kube-api-access-4pxnf\") pod \"community-operators-zklnt\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:26 crc kubenswrapper[4684]: I0123 10:11:26.576964 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:27 crc kubenswrapper[4684]: I0123 10:11:27.454618 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0","Type":"ContainerStarted","Data":"04520094deb79bdf21fa60baa3e4b104434cf668c42f9bc7f71980a18a92735e"} Jan 23 10:11:27 crc kubenswrapper[4684]: I0123 10:11:27.457292 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0804b14-3b60-4dbc-8e29-9cb493b96de4","Type":"ContainerStarted","Data":"fe5acf813e72261b89eea5bd5cb4dfeb5db8f3afeba70e5161447ac2b15d5b23"} Jan 23 10:11:27 crc kubenswrapper[4684]: I0123 10:11:27.495301 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.495276043 podStartE2EDuration="6.495276043s" podCreationTimestamp="2026-01-23 10:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:11:27.486011679 +0000 UTC m=+3860.109390220" watchObservedRunningTime="2026-01-23 10:11:27.495276043 +0000 UTC m=+3860.118654594" Jan 23 10:11:27 crc kubenswrapper[4684]: I0123 10:11:27.632413 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zklnt"] Jan 23 10:11:27 crc kubenswrapper[4684]: W0123 10:11:27.782706 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fbf4c7d_6e2e_44cd_852e_903aa8602f9f.slice/crio-422158a8186fd2bc28d57c17f07f6fd29c4d0b5721d455c00e7671f85ea02d99 WatchSource:0}: Error finding container 422158a8186fd2bc28d57c17f07f6fd29c4d0b5721d455c00e7671f85ea02d99: Status 404 returned error can't find the container with id 422158a8186fd2bc28d57c17f07f6fd29c4d0b5721d455c00e7671f85ea02d99 Jan 23 10:11:27 crc kubenswrapper[4684]: I0123 10:11:27.796353 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mdmkd"] Jan 23 10:11:27 crc kubenswrapper[4684]: W0123 10:11:27.865934 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd829550_43d3_42d9_a9b4_e088ef820a77.slice/crio-a58658152fef1d00c0e4cf57f6132bcc1b43e709caced30c3b16a6044d48e5ed WatchSource:0}: Error finding container a58658152fef1d00c0e4cf57f6132bcc1b43e709caced30c3b16a6044d48e5ed: Status 404 returned error can't find the container with id a58658152fef1d00c0e4cf57f6132bcc1b43e709caced30c3b16a6044d48e5ed Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.481819 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jsxqf"] Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.488435 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.490434 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mdmkd" event={"ID":"bd829550-43d3-42d9-a9b4-e088ef820a77","Type":"ContainerStarted","Data":"a58658152fef1d00c0e4cf57f6132bcc1b43e709caced30c3b16a6044d48e5ed"} Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.495679 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerStarted","Data":"422158a8186fd2bc28d57c17f07f6fd29c4d0b5721d455c00e7671f85ea02d99"} Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.501460 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7c366f0-4ad9-4ec9-91ff-bab599bae5d0","Type":"ContainerStarted","Data":"2973092e63ff1a907650c08beb4c8632c45c0380a609f664fd213d95a88be03f"} Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.504893 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsxqf"] Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.549685 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.549645036 podStartE2EDuration="5.549645036s" podCreationTimestamp="2026-01-23 10:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:11:28.545451066 +0000 UTC m=+3861.168829617" watchObservedRunningTime="2026-01-23 10:11:28.549645036 +0000 UTC m=+3861.173023587" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.667312 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-utilities\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.667491 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c576z\" (UniqueName: \"kubernetes.io/projected/8cd40dbd-dde6-4dab-ad91-26b0c526d129-kube-api-access-c576z\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.667572 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-catalog-content\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.769998 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c576z\" (UniqueName: \"kubernetes.io/projected/8cd40dbd-dde6-4dab-ad91-26b0c526d129-kube-api-access-c576z\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.770520 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-catalog-content\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.771045 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-utilities\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.771050 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-catalog-content\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.771605 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-utilities\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.807418 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c576z\" (UniqueName: \"kubernetes.io/projected/8cd40dbd-dde6-4dab-ad91-26b0c526d129-kube-api-access-c576z\") pod \"redhat-operators-jsxqf\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:28 crc kubenswrapper[4684]: I0123 10:11:28.827079 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:29 crc kubenswrapper[4684]: I0123 10:11:29.528438 4684 generic.go:334] "Generic (PLEG): container finished" podID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerID="c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008" exitCode=0 Jan 23 10:11:29 crc kubenswrapper[4684]: I0123 10:11:29.529266 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerDied","Data":"c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008"} Jan 23 10:11:29 crc kubenswrapper[4684]: I0123 10:11:29.809417 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 23 10:11:31 crc kubenswrapper[4684]: I0123 10:11:31.997245 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:31 crc kubenswrapper[4684]: I0123 10:11:31.997845 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:32 crc kubenswrapper[4684]: I0123 10:11:32.029644 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:32 crc kubenswrapper[4684]: I0123 10:11:32.043843 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:32 crc kubenswrapper[4684]: I0123 10:11:32.562727 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:32 crc kubenswrapper[4684]: I0123 10:11:32.562967 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:33 crc kubenswrapper[4684]: I0123 10:11:33.881358 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 10:11:33 crc kubenswrapper[4684]: I0123 10:11:33.881772 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 10:11:33 crc kubenswrapper[4684]: I0123 10:11:33.922458 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 10:11:33 crc kubenswrapper[4684]: I0123 10:11:33.936885 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 10:11:34 crc kubenswrapper[4684]: I0123 10:11:34.586621 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 10:11:34 crc kubenswrapper[4684]: I0123 10:11:34.586648 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 10:11:34 crc kubenswrapper[4684]: I0123 10:11:34.586801 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 10:11:34 crc kubenswrapper[4684]: I0123 10:11:34.586832 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 10:11:38 crc kubenswrapper[4684]: E0123 10:11:38.090840 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 23 10:11:38 crc kubenswrapper[4684]: E0123 10:11:38.091546 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6ch85h687h9dhbdh65h5hc4h5d7h57bh69hb7h575h54bh556h68fh99h5h654h686h666h577h88h66bh57fh96h5cbh84h5c6h664hch66dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzk7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5688b9fcb7-jmp7t_openstack(a6526c5c-0da9-4294-a03a-6a8276b3d381): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 10:11:38 crc kubenswrapper[4684]: E0123 10:11:38.094040 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-5688b9fcb7-jmp7t" podUID="a6526c5c-0da9-4294-a03a-6a8276b3d381" Jan 23 10:11:38 crc kubenswrapper[4684]: E0123 10:11:38.195242 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 23 10:11:38 crc kubenswrapper[4684]: E0123 10:11:38.195656 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65bh5dch5dh666h65ch67dh5bh5ffh685h5cbhb8h58h66h676h59fh558h5bdh5h646h65ch98h4h646h58dh694hfdh675h676h64fh5bfh59h5b8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzfrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59d6c7fdc9-qhdcc_openstack(ebba5660-17ca-4b84-9a66-a496add9d7cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 10:11:39 crc kubenswrapper[4684]: E0123 10:11:39.191044 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-59d6c7fdc9-qhdcc" podUID="ebba5660-17ca-4b84-9a66-a496add9d7cc" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.599187 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.700104 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsxqf"] Jan 23 10:11:39 crc kubenswrapper[4684]: W0123 10:11:39.712858 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd40dbd_dde6_4dab_ad91_26b0c526d129.slice/crio-15c5cf8cea00ecdc274ee2762b631c644dda63c03d1121a94a43a05a99a93c76 WatchSource:0}: Error finding container 15c5cf8cea00ecdc274ee2762b631c644dda63c03d1121a94a43a05a99a93c76: Status 404 returned error can't find the container with id 15c5cf8cea00ecdc274ee2762b631c644dda63c03d1121a94a43a05a99a93c76 Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.717411 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-config-data\") pod \"a6526c5c-0da9-4294-a03a-6a8276b3d381\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.717671 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6526c5c-0da9-4294-a03a-6a8276b3d381-logs\") pod \"a6526c5c-0da9-4294-a03a-6a8276b3d381\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.717773 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-scripts\") pod \"a6526c5c-0da9-4294-a03a-6a8276b3d381\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.717928 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzk7l\" (UniqueName: \"kubernetes.io/projected/a6526c5c-0da9-4294-a03a-6a8276b3d381-kube-api-access-vzk7l\") pod \"a6526c5c-0da9-4294-a03a-6a8276b3d381\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.718033 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6526c5c-0da9-4294-a03a-6a8276b3d381-horizon-secret-key\") pod \"a6526c5c-0da9-4294-a03a-6a8276b3d381\" (UID: \"a6526c5c-0da9-4294-a03a-6a8276b3d381\") " Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.718265 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6526c5c-0da9-4294-a03a-6a8276b3d381-logs" (OuterVolumeSpecName: "logs") pod "a6526c5c-0da9-4294-a03a-6a8276b3d381" (UID: "a6526c5c-0da9-4294-a03a-6a8276b3d381"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.718629 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6526c5c-0da9-4294-a03a-6a8276b3d381-logs\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.719197 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-config-data" (OuterVolumeSpecName: "config-data") pod "a6526c5c-0da9-4294-a03a-6a8276b3d381" (UID: "a6526c5c-0da9-4294-a03a-6a8276b3d381"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.720155 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-scripts" (OuterVolumeSpecName: "scripts") pod "a6526c5c-0da9-4294-a03a-6a8276b3d381" (UID: "a6526c5c-0da9-4294-a03a-6a8276b3d381"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.728309 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df5b758fb-8sfdj" event={"ID":"78d43a15-1645-42a6-a25b-a6c4d7a244c4","Type":"ContainerStarted","Data":"8706ea5d489056cd40b9010c210ae502a9edce0a2f530f9cad8b9b9a13479335"} Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.730655 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6526c5c-0da9-4294-a03a-6a8276b3d381-kube-api-access-vzk7l" (OuterVolumeSpecName: "kube-api-access-vzk7l") pod "a6526c5c-0da9-4294-a03a-6a8276b3d381" (UID: "a6526c5c-0da9-4294-a03a-6a8276b3d381"). InnerVolumeSpecName "kube-api-access-vzk7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.731107 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6526c5c-0da9-4294-a03a-6a8276b3d381-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a6526c5c-0da9-4294-a03a-6a8276b3d381" (UID: "a6526c5c-0da9-4294-a03a-6a8276b3d381"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.734472 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerStarted","Data":"39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21"} Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.739416 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5688b9fcb7-jmp7t" event={"ID":"a6526c5c-0da9-4294-a03a-6a8276b3d381","Type":"ContainerDied","Data":"da3b59a3f7feaabade870353cef881c8e2351810c3c0fcd162a235925e2ae4a7"} Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.739485 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5688b9fcb7-jmp7t" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.783236 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc7f74bf4-rpjsz" event={"ID":"d510be09-5472-4350-8930-0cda7b4b9c84","Type":"ContainerStarted","Data":"500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e"} Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.803309 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d6c7fdc9-qhdcc" event={"ID":"ebba5660-17ca-4b84-9a66-a496add9d7cc","Type":"ContainerStarted","Data":"c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98"} Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.803497 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-59d6c7fdc9-qhdcc" podUID="ebba5660-17ca-4b84-9a66-a496add9d7cc" containerName="horizon" containerID="cri-o://c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98" gracePeriod=30 Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.820634 4684 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6526c5c-0da9-4294-a03a-6a8276b3d381-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.820672 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.820683 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6526c5c-0da9-4294-a03a-6a8276b3d381-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.820712 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzk7l\" (UniqueName: \"kubernetes.io/projected/a6526c5c-0da9-4294-a03a-6a8276b3d381-kube-api-access-vzk7l\") on node \"crc\" DevicePath \"\"" Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.944537 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5688b9fcb7-jmp7t"] Jan 23 10:11:39 crc kubenswrapper[4684]: I0123 10:11:39.974192 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5688b9fcb7-jmp7t"] Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.815983 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc7f74bf4-rpjsz" event={"ID":"d510be09-5472-4350-8930-0cda7b4b9c84","Type":"ContainerStarted","Data":"110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30"} Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.820044 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df5b758fb-8sfdj" event={"ID":"78d43a15-1645-42a6-a25b-a6c4d7a244c4","Type":"ContainerStarted","Data":"dc8c5b0795461756572228e25c06926d3e363425ec9a0870d9103ee9701634b3"} Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.822533 4684 generic.go:334] "Generic (PLEG): container finished" podID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerID="80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6" exitCode=0 Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.823637 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" 
event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerDied","Data":"80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6"} Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.823674 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerStarted","Data":"15c5cf8cea00ecdc274ee2762b631c644dda63c03d1121a94a43a05a99a93c76"} Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.853844 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6dc7f74bf4-rpjsz" podStartSLOduration=4.119828647 podStartE2EDuration="22.85382016s" podCreationTimestamp="2026-01-23 10:11:18 +0000 UTC" firstStartedPulling="2026-01-23 10:11:19.490864918 +0000 UTC m=+3852.114243459" lastFinishedPulling="2026-01-23 10:11:38.224856431 +0000 UTC m=+3870.848234972" observedRunningTime="2026-01-23 10:11:40.835504447 +0000 UTC m=+3873.458882988" watchObservedRunningTime="2026-01-23 10:11:40.85382016 +0000 UTC m=+3873.477198701" Jan 23 10:11:40 crc kubenswrapper[4684]: I0123 10:11:40.898767 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7df5b758fb-8sfdj" podStartSLOduration=4.966960939 podStartE2EDuration="22.89874141s" podCreationTimestamp="2026-01-23 10:11:18 +0000 UTC" firstStartedPulling="2026-01-23 10:11:20.288173321 +0000 UTC m=+3852.911551862" lastFinishedPulling="2026-01-23 10:11:38.219953792 +0000 UTC m=+3870.843332333" observedRunningTime="2026-01-23 10:11:40.89030748 +0000 UTC m=+3873.513686041" watchObservedRunningTime="2026-01-23 10:11:40.89874141 +0000 UTC m=+3873.522119951" Jan 23 10:11:41 crc kubenswrapper[4684]: I0123 10:11:41.605417 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6526c5c-0da9-4294-a03a-6a8276b3d381" path="/var/lib/kubelet/pods/a6526c5c-0da9-4294-a03a-6a8276b3d381/volumes" Jan 23 10:11:41 crc kubenswrapper[4684]: I0123 10:11:41.867830 4684 generic.go:334] "Generic (PLEG): container finished" podID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerID="39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21" exitCode=0 Jan 23 10:11:41 crc kubenswrapper[4684]: I0123 10:11:41.867900 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerDied","Data":"39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21"} Jan 23 10:11:42 crc kubenswrapper[4684]: I0123 10:11:42.227129 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 10:11:42 crc kubenswrapper[4684]: I0123 10:11:42.227239 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 10:11:42 crc kubenswrapper[4684]: I0123 10:11:42.255949 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:42 crc kubenswrapper[4684]: I0123 10:11:42.256063 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 10:11:42 crc kubenswrapper[4684]: I0123 10:11:42.257651 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 10:11:42 crc kubenswrapper[4684]: I0123 10:11:42.272647 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" 
Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.729120 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.729394 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.729433 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.730250 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.730293 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" gracePeriod=600 Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.896841 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" exitCode=0 Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.896883 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"} Jan 23 10:11:43 crc kubenswrapper[4684]: I0123 10:11:43.896914 4684 scope.go:117] "RemoveContainer" containerID="ceb6b580f569b2fa2d093ef8e815058bc34f53db19466664eaf44145b4851560" Jan 23 10:11:45 crc kubenswrapper[4684]: I0123 10:11:45.996362 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:11:46 crc kubenswrapper[4684]: E0123 10:11:46.685005 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:11:46 crc kubenswrapper[4684]: I0123 10:11:46.931954 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:11:46 crc kubenswrapper[4684]: E0123 10:11:46.932269 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:11:48 crc kubenswrapper[4684]: I0123 10:11:48.752516 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:48 crc kubenswrapper[4684]: I0123 10:11:48.753081 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:11:48 crc kubenswrapper[4684]: I0123 10:11:48.959576 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerStarted","Data":"f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c"} Jan 23 10:11:48 crc kubenswrapper[4684]: I0123 10:11:48.964910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerStarted","Data":"cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62"} Jan 23 10:11:48 crc kubenswrapper[4684]: I0123 10:11:48.986870 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zklnt" podStartSLOduration=12.885787556 podStartE2EDuration="22.986846033s" podCreationTimestamp="2026-01-23 10:11:26 +0000 UTC" firstStartedPulling="2026-01-23 10:11:38.068334798 +0000 UTC m=+3870.691713339" lastFinishedPulling="2026-01-23 10:11:48.169393275 +0000 UTC m=+3880.792771816" observedRunningTime="2026-01-23 10:11:48.977916728 +0000 UTC m=+3881.601295279" watchObservedRunningTime="2026-01-23 10:11:48.986846033 +0000 UTC m=+3881.610224574" Jan 23 10:11:49 crc kubenswrapper[4684]: I0123 10:11:49.053172 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:49 crc kubenswrapper[4684]: I0123 10:11:49.053233 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:11:49 crc kubenswrapper[4684]: I0123 10:11:49.055251 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df5b758fb-8sfdj" podUID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 23 10:11:49 crc kubenswrapper[4684]: I0123 10:11:49.982624 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mdmkd" event={"ID":"bd829550-43d3-42d9-a9b4-e088ef820a77","Type":"ContainerStarted","Data":"c2d9e68fc6d1318a60f5e585926097d58beebbc02b5060432a0b9bf9f5fdd3e7"} Jan 23 10:11:50 crc kubenswrapper[4684]: I0123 10:11:50.010665 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-mdmkd" podStartSLOduration=4.776433634 podStartE2EDuration="25.010645853s" podCreationTimestamp="2026-01-23 10:11:25 +0000 UTC" firstStartedPulling="2026-01-23 10:11:27.935951118 +0000 UTC m=+3860.559329659" lastFinishedPulling="2026-01-23 10:11:48.170163337 +0000 UTC m=+3880.793541878" observedRunningTime="2026-01-23 10:11:50.005197778 +0000 UTC m=+3882.628576319" 
watchObservedRunningTime="2026-01-23 10:11:50.010645853 +0000 UTC m=+3882.634024394" Jan 23 10:11:55 crc kubenswrapper[4684]: I0123 10:11:55.025564 4684 generic.go:334] "Generic (PLEG): container finished" podID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerID="cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62" exitCode=0 Jan 23 10:11:55 crc kubenswrapper[4684]: I0123 10:11:55.025780 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerDied","Data":"cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62"} Jan 23 10:11:56 crc kubenswrapper[4684]: I0123 10:11:56.578270 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:56 crc kubenswrapper[4684]: I0123 10:11:56.578532 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:11:57 crc kubenswrapper[4684]: I0123 10:11:57.047174 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerStarted","Data":"3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7"} Jan 23 10:11:57 crc kubenswrapper[4684]: I0123 10:11:57.094194 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jsxqf" podStartSLOduration=14.36513649 podStartE2EDuration="29.094165751s" podCreationTimestamp="2026-01-23 10:11:28 +0000 UTC" firstStartedPulling="2026-01-23 10:11:40.825467491 +0000 UTC m=+3873.448846032" lastFinishedPulling="2026-01-23 10:11:55.554496752 +0000 UTC m=+3888.177875293" observedRunningTime="2026-01-23 10:11:57.087216483 +0000 UTC m=+3889.710595024" watchObservedRunningTime="2026-01-23 10:11:57.094165751 +0000 UTC m=+3889.717544302" Jan 23 10:11:57 crc kubenswrapper[4684]: I0123 10:11:57.654400 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zklnt" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="registry-server" probeResult="failure" output=< Jan 23 10:11:57 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:11:57 crc kubenswrapper[4684]: > Jan 23 10:11:58 crc kubenswrapper[4684]: I0123 10:11:58.754735 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 23 10:11:58 crc kubenswrapper[4684]: I0123 10:11:58.828583 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:58 crc kubenswrapper[4684]: I0123 10:11:58.828636 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:11:59 crc kubenswrapper[4684]: I0123 10:11:59.053761 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df5b758fb-8sfdj" podUID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection 
refused" Jan 23 10:11:59 crc kubenswrapper[4684]: I0123 10:11:59.876575 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jsxqf" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" probeResult="failure" output=< Jan 23 10:11:59 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:11:59 crc kubenswrapper[4684]: > Jan 23 10:12:01 crc kubenswrapper[4684]: I0123 10:12:01.581922 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:12:01 crc kubenswrapper[4684]: E0123 10:12:01.582428 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:12:06 crc kubenswrapper[4684]: I0123 10:12:06.637346 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:12:06 crc kubenswrapper[4684]: I0123 10:12:06.697327 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:12:06 crc kubenswrapper[4684]: I0123 10:12:06.883665 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zklnt"] Jan 23 10:12:08 crc kubenswrapper[4684]: I0123 10:12:08.746241 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zklnt" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="registry-server" containerID="cri-o://f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c" gracePeriod=2 Jan 23 10:12:08 crc kubenswrapper[4684]: I0123 10:12:08.753146 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.054368 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df5b758fb-8sfdj" podUID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.054764 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.056614 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"dc8c5b0795461756572228e25c06926d3e363425ec9a0870d9103ee9701634b3"} pod="openstack/horizon-7df5b758fb-8sfdj" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.056684 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7df5b758fb-8sfdj" 
podUID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerName="horizon" containerID="cri-o://dc8c5b0795461756572228e25c06926d3e363425ec9a0870d9103ee9701634b3" gracePeriod=30 Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.753453 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.758679 4684 generic.go:334] "Generic (PLEG): container finished" podID="bd829550-43d3-42d9-a9b4-e088ef820a77" containerID="c2d9e68fc6d1318a60f5e585926097d58beebbc02b5060432a0b9bf9f5fdd3e7" exitCode=0 Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.758779 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mdmkd" event={"ID":"bd829550-43d3-42d9-a9b4-e088ef820a77","Type":"ContainerDied","Data":"c2d9e68fc6d1318a60f5e585926097d58beebbc02b5060432a0b9bf9f5fdd3e7"} Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.766222 4684 generic.go:334] "Generic (PLEG): container finished" podID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerID="f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c" exitCode=0 Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.766309 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerDied","Data":"f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c"} Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.766341 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zklnt" event={"ID":"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f","Type":"ContainerDied","Data":"422158a8186fd2bc28d57c17f07f6fd29c4d0b5721d455c00e7671f85ea02d99"} Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.766359 4684 scope.go:117] "RemoveContainer" containerID="f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.766640 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zklnt" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.804031 4684 scope.go:117] "RemoveContainer" containerID="39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.849591 4684 scope.go:117] "RemoveContainer" containerID="c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.852432 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-catalog-content\") pod \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.852497 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pxnf\" (UniqueName: \"kubernetes.io/projected/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-kube-api-access-4pxnf\") pod \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.852527 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-utilities\") pod \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\" (UID: \"6fbf4c7d-6e2e-44cd-852e-903aa8602f9f\") " Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.878846 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-utilities" (OuterVolumeSpecName: "utilities") pod "6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" (UID: "6fbf4c7d-6e2e-44cd-852e-903aa8602f9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.880823 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jsxqf" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" probeResult="failure" output=< Jan 23 10:12:09 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:12:09 crc kubenswrapper[4684]: > Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.926952 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-kube-api-access-4pxnf" (OuterVolumeSpecName: "kube-api-access-4pxnf") pod "6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" (UID: "6fbf4c7d-6e2e-44cd-852e-903aa8602f9f"). InnerVolumeSpecName "kube-api-access-4pxnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.979731 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pxnf\" (UniqueName: \"kubernetes.io/projected/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-kube-api-access-4pxnf\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.979766 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.981643 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" (UID: "6fbf4c7d-6e2e-44cd-852e-903aa8602f9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.987208 4684 scope.go:117] "RemoveContainer" containerID="f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c" Jan 23 10:12:09 crc kubenswrapper[4684]: E0123 10:12:09.988094 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c\": container with ID starting with f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c not found: ID does not exist" containerID="f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.988125 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c"} err="failed to get container status \"f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c\": rpc error: code = NotFound desc = could not find container \"f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c\": container with ID starting with f1b151ab2da7b71cbf51b21749188492a75a879047b5626ef9c05199b12bc06c not found: ID does not exist" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.988146 4684 scope.go:117] "RemoveContainer" containerID="39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21" Jan 23 10:12:09 crc kubenswrapper[4684]: E0123 10:12:09.988665 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21\": container with ID starting with 39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21 not found: ID does not exist" containerID="39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.988683 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21"} err="failed to get container status \"39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21\": rpc error: code = NotFound desc = could not find container \"39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21\": container with ID starting with 39bf389a467a992474237fc5b793766578fdbbb69199508a7b7dfc24c84f4e21 not found: ID does not exist" Jan 23 10:12:09 crc 
kubenswrapper[4684]: I0123 10:12:09.988713 4684 scope.go:117] "RemoveContainer" containerID="c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008" Jan 23 10:12:09 crc kubenswrapper[4684]: E0123 10:12:09.989061 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008\": container with ID starting with c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008 not found: ID does not exist" containerID="c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008" Jan 23 10:12:09 crc kubenswrapper[4684]: I0123 10:12:09.989093 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008"} err="failed to get container status \"c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008\": rpc error: code = NotFound desc = could not find container \"c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008\": container with ID starting with c064639d49073778669c447a6b3980fba3abeb7a940378f6b6459dd5eb190008 not found: ID does not exist" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.095513 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.135745 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zklnt"] Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.149222 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zklnt"] Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.379106 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.402585 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-scripts\") pod \"ebba5660-17ca-4b84-9a66-a496add9d7cc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.402680 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzfrr\" (UniqueName: \"kubernetes.io/projected/ebba5660-17ca-4b84-9a66-a496add9d7cc-kube-api-access-nzfrr\") pod \"ebba5660-17ca-4b84-9a66-a496add9d7cc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.402728 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-config-data\") pod \"ebba5660-17ca-4b84-9a66-a496add9d7cc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.402750 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebba5660-17ca-4b84-9a66-a496add9d7cc-logs\") pod \"ebba5660-17ca-4b84-9a66-a496add9d7cc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.403538 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebba5660-17ca-4b84-9a66-a496add9d7cc-logs" (OuterVolumeSpecName: "logs") pod "ebba5660-17ca-4b84-9a66-a496add9d7cc" (UID: "ebba5660-17ca-4b84-9a66-a496add9d7cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.410994 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebba5660-17ca-4b84-9a66-a496add9d7cc-kube-api-access-nzfrr" (OuterVolumeSpecName: "kube-api-access-nzfrr") pod "ebba5660-17ca-4b84-9a66-a496add9d7cc" (UID: "ebba5660-17ca-4b84-9a66-a496add9d7cc"). InnerVolumeSpecName "kube-api-access-nzfrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.436462 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-scripts" (OuterVolumeSpecName: "scripts") pod "ebba5660-17ca-4b84-9a66-a496add9d7cc" (UID: "ebba5660-17ca-4b84-9a66-a496add9d7cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.447376 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-config-data" (OuterVolumeSpecName: "config-data") pod "ebba5660-17ca-4b84-9a66-a496add9d7cc" (UID: "ebba5660-17ca-4b84-9a66-a496add9d7cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.504450 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebba5660-17ca-4b84-9a66-a496add9d7cc-horizon-secret-key\") pod \"ebba5660-17ca-4b84-9a66-a496add9d7cc\" (UID: \"ebba5660-17ca-4b84-9a66-a496add9d7cc\") " Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.504997 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzfrr\" (UniqueName: \"kubernetes.io/projected/ebba5660-17ca-4b84-9a66-a496add9d7cc-kube-api-access-nzfrr\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.505028 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.505043 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebba5660-17ca-4b84-9a66-a496add9d7cc-logs\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.505054 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebba5660-17ca-4b84-9a66-a496add9d7cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.513657 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebba5660-17ca-4b84-9a66-a496add9d7cc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ebba5660-17ca-4b84-9a66-a496add9d7cc" (UID: "ebba5660-17ca-4b84-9a66-a496add9d7cc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.606609 4684 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ebba5660-17ca-4b84-9a66-a496add9d7cc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.775192 4684 generic.go:334] "Generic (PLEG): container finished" podID="ebba5660-17ca-4b84-9a66-a496add9d7cc" containerID="c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98" exitCode=137 Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.775289 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59d6c7fdc9-qhdcc" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.776163 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d6c7fdc9-qhdcc" event={"ID":"ebba5660-17ca-4b84-9a66-a496add9d7cc","Type":"ContainerDied","Data":"c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98"} Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.776372 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59d6c7fdc9-qhdcc" event={"ID":"ebba5660-17ca-4b84-9a66-a496add9d7cc","Type":"ContainerDied","Data":"abec2bf3570a222fae3ebf82191744dc27ae46ceb2b820e0e288e1d481f3c50d"} Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.776445 4684 scope.go:117] "RemoveContainer" containerID="c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.868884 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59d6c7fdc9-qhdcc"] Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.882370 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59d6c7fdc9-qhdcc"] Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.943936 4684 scope.go:117] "RemoveContainer" containerID="c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98" Jan 23 10:12:10 crc kubenswrapper[4684]: E0123 10:12:10.944837 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98\": container with ID starting with c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98 not found: ID does not exist" containerID="c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98" Jan 23 10:12:10 crc kubenswrapper[4684]: I0123 10:12:10.944875 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98"} err="failed to get container status \"c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98\": rpc error: code = NotFound desc = could not find container \"c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98\": container with ID starting with c79d9606153bea0eef02b860114a25cfd265247989133e1a083dd4c94a001e98 not found: ID does not exist" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.601619 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" path="/var/lib/kubelet/pods/6fbf4c7d-6e2e-44cd-852e-903aa8602f9f/volumes" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.602731 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebba5660-17ca-4b84-9a66-a496add9d7cc" path="/var/lib/kubelet/pods/ebba5660-17ca-4b84-9a66-a496add9d7cc/volumes" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.683369 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mdmkd" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.787174 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mdmkd" event={"ID":"bd829550-43d3-42d9-a9b4-e088ef820a77","Type":"ContainerDied","Data":"a58658152fef1d00c0e4cf57f6132bcc1b43e709caced30c3b16a6044d48e5ed"} Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.787216 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58658152fef1d00c0e4cf57f6132bcc1b43e709caced30c3b16a6044d48e5ed" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.787279 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-mdmkd" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.827488 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-job-config-data\") pod \"bd829550-43d3-42d9-a9b4-e088ef820a77\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.827605 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-combined-ca-bundle\") pod \"bd829550-43d3-42d9-a9b4-e088ef820a77\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.827674 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vksvh\" (UniqueName: \"kubernetes.io/projected/bd829550-43d3-42d9-a9b4-e088ef820a77-kube-api-access-vksvh\") pod \"bd829550-43d3-42d9-a9b4-e088ef820a77\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.827789 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-config-data\") pod \"bd829550-43d3-42d9-a9b4-e088ef820a77\" (UID: \"bd829550-43d3-42d9-a9b4-e088ef820a77\") " Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.849437 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "bd829550-43d3-42d9-a9b4-e088ef820a77" (UID: "bd829550-43d3-42d9-a9b4-e088ef820a77"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.852597 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd829550-43d3-42d9-a9b4-e088ef820a77-kube-api-access-vksvh" (OuterVolumeSpecName: "kube-api-access-vksvh") pod "bd829550-43d3-42d9-a9b4-e088ef820a77" (UID: "bd829550-43d3-42d9-a9b4-e088ef820a77"). InnerVolumeSpecName "kube-api-access-vksvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.854125 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-config-data" (OuterVolumeSpecName: "config-data") pod "bd829550-43d3-42d9-a9b4-e088ef820a77" (UID: "bd829550-43d3-42d9-a9b4-e088ef820a77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.863195 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd829550-43d3-42d9-a9b4-e088ef820a77" (UID: "bd829550-43d3-42d9-a9b4-e088ef820a77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.930663 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.930713 4684 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.930739 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd829550-43d3-42d9-a9b4-e088ef820a77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:11 crc kubenswrapper[4684]: I0123 10:12:11.930753 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vksvh\" (UniqueName: \"kubernetes.io/projected/bd829550-43d3-42d9-a9b4-e088ef820a77-kube-api-access-vksvh\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.259139 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:12 crc kubenswrapper[4684]: E0123 10:12:12.259520 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebba5660-17ca-4b84-9a66-a496add9d7cc" containerName="horizon" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.259792 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebba5660-17ca-4b84-9a66-a496add9d7cc" containerName="horizon" Jan 23 10:12:12 crc kubenswrapper[4684]: E0123 10:12:12.259809 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="extract-content" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.259815 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="extract-content" Jan 23 10:12:12 crc kubenswrapper[4684]: E0123 10:12:12.259831 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="extract-utilities" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.259838 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="extract-utilities" Jan 23 10:12:12 crc kubenswrapper[4684]: E0123 10:12:12.259846 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd829550-43d3-42d9-a9b4-e088ef820a77" containerName="manila-db-sync" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.259852 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd829550-43d3-42d9-a9b4-e088ef820a77" containerName="manila-db-sync" Jan 23 10:12:12 crc kubenswrapper[4684]: E0123 10:12:12.259860 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="registry-server" Jan 23 10:12:12 crc kubenswrapper[4684]: 
I0123 10:12:12.259866 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="registry-server" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.260041 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebba5660-17ca-4b84-9a66-a496add9d7cc" containerName="horizon" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.260066 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbf4c7d-6e2e-44cd-852e-903aa8602f9f" containerName="registry-server" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.260080 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd829550-43d3-42d9-a9b4-e088ef820a77" containerName="manila-db-sync" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.261058 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.270686 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-6j9z9" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.270686 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.271133 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.272865 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.334456 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.346105 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.367666 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.373254 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.440755 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r86q\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-kube-api-access-9r86q\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.440801 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.440839 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.440880 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-scripts\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.440991 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441062 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-scripts\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441135 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441225 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc 
kubenswrapper[4684]: I0123 10:12:12.441255 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-ceph\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441327 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8415d7b-df39-4568-a028-7556ef70b916-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441362 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjlp4\" (UniqueName: \"kubernetes.io/projected/e8415d7b-df39-4568-a028-7556ef70b916-kube-api-access-rjlp4\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441389 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441410 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.441476 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.466323 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543577 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543635 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-scripts\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543672 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " 
pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543723 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543746 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-ceph\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543796 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8415d7b-df39-4568-a028-7556ef70b916-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543821 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjlp4\" (UniqueName: \"kubernetes.io/projected/e8415d7b-df39-4568-a028-7556ef70b916-kube-api-access-rjlp4\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543872 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543929 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.543988 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.544013 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r86q\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-kube-api-access-9r86q\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.544040 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.544070 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-scripts\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.546462 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8415d7b-df39-4568-a028-7556ef70b916-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.546656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.548003 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-scripts\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.548070 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.559957 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.576842 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-ceph\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.576949 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.579409 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.580503 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-scripts\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.581063 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.581668 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.595514 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r86q\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-kube-api-access-9r86q\") pod \"manila-share-share1-0\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.600486 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.610174 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjlp4\" (UniqueName: \"kubernetes.io/projected/e8415d7b-df39-4568-a028-7556ef70b916-kube-api-access-rjlp4\") pod \"manila-scheduler-0\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.703069 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 10:12:12 crc kubenswrapper[4684]: I0123 10:12:12.877809 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.001173 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.002832 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.046817 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.143223 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.223927 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eab2a2c4-440f-4b67-9eea-5386108bb9a9-etc-machine-id\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.224047 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-scripts\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.224072 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.224108 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkxd6\" (UniqueName: \"kubernetes.io/projected/eab2a2c4-440f-4b67-9eea-5386108bb9a9-kube-api-access-hkxd6\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.224131 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data-custom\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.224209 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.224230 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eab2a2c4-440f-4b67-9eea-5386108bb9a9-logs\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326045 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-scripts\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326097 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326133 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkxd6\" (UniqueName: \"kubernetes.io/projected/eab2a2c4-440f-4b67-9eea-5386108bb9a9-kube-api-access-hkxd6\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326155 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data-custom\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326190 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326230 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eab2a2c4-440f-4b67-9eea-5386108bb9a9-logs\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326275 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eab2a2c4-440f-4b67-9eea-5386108bb9a9-etc-machine-id\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.326345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eab2a2c4-440f-4b67-9eea-5386108bb9a9-etc-machine-id\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.333322 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-scripts\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.335047 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eab2a2c4-440f-4b67-9eea-5386108bb9a9-logs\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.349020 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data-custom\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.356677 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.372401 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbdfc799f-zk2np"] Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.373815 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.387477 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.397151 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkxd6\" (UniqueName: \"kubernetes.io/projected/eab2a2c4-440f-4b67-9eea-5386108bb9a9-kube-api-access-hkxd6\") pod \"manila-api-0\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.407509 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.412619 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbdfc799f-zk2np"] Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.429636 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlt2\" (UniqueName: \"kubernetes.io/projected/e93e4d61-ad39-41c9-80ce-653f91213f4d-kube-api-access-gqlt2\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.429730 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-dns-svc\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.429762 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-ovsdbserver-sb\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.429790 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-ovsdbserver-nb\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.429873 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-config\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " 
pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.429903 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.534654 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-config\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.535674 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.535847 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqlt2\" (UniqueName: \"kubernetes.io/projected/e93e4d61-ad39-41c9-80ce-653f91213f4d-kube-api-access-gqlt2\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.535948 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-dns-svc\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.535988 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-ovsdbserver-sb\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.536027 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-ovsdbserver-nb\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.536795 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-ovsdbserver-nb\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.535581 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-config\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc 
kubenswrapper[4684]: I0123 10:12:13.537350 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.538142 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-dns-svc\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:13 crc kubenswrapper[4684]: I0123 10:12:13.542511 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e93e4d61-ad39-41c9-80ce-653f91213f4d-ovsdbserver-sb\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:14 crc kubenswrapper[4684]: I0123 10:12:14.107230 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlt2\" (UniqueName: \"kubernetes.io/projected/e93e4d61-ad39-41c9-80ce-653f91213f4d-kube-api-access-gqlt2\") pod \"dnsmasq-dns-dbdfc799f-zk2np\" (UID: \"e93e4d61-ad39-41c9-80ce-653f91213f4d\") " pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:14 crc kubenswrapper[4684]: I0123 10:12:14.110221 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.374762 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.405148 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.414814 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.445131 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbdfc799f-zk2np"] Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.873680 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e8415d7b-df39-4568-a028-7556ef70b916","Type":"ContainerStarted","Data":"e1ca9bdcaba619d25675d3f624bd51b6620b8cd02f61cb82eadc885600bb46c9"} Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.875149 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" event={"ID":"e93e4d61-ad39-41c9-80ce-653f91213f4d","Type":"ContainerStarted","Data":"84183b3a9cafb7380567d9a45adac412a222fafaa2917fb32886acb3e016f536"} Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.877736 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"601d1dbf-3e41-4f48-86a5-2038be6a33b3","Type":"ContainerStarted","Data":"3664ce02766766b32242f2123f0346dbaf5448b19b69510784215d1939520cd3"} Jan 23 10:12:15 crc kubenswrapper[4684]: I0123 10:12:15.879941 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"eab2a2c4-440f-4b67-9eea-5386108bb9a9","Type":"ContainerStarted","Data":"35747d7ffa2ccbad7ea06d6197d9b2492b915b95f9cfb5899f76aa3d4bc5d8e5"} Jan 23 10:12:16 
crc kubenswrapper[4684]: I0123 10:12:16.585015 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:12:16 crc kubenswrapper[4684]: E0123 10:12:16.592145 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:12:16 crc kubenswrapper[4684]: I0123 10:12:16.910500 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"eab2a2c4-440f-4b67-9eea-5386108bb9a9","Type":"ContainerStarted","Data":"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0"} Jan 23 10:12:16 crc kubenswrapper[4684]: I0123 10:12:16.916851 4684 generic.go:334] "Generic (PLEG): container finished" podID="e93e4d61-ad39-41c9-80ce-653f91213f4d" containerID="fe3011a003164d44d90d67ac1fc43170f0b6764388166665390af016877670cb" exitCode=0 Jan 23 10:12:16 crc kubenswrapper[4684]: I0123 10:12:16.916984 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" event={"ID":"e93e4d61-ad39-41c9-80ce-653f91213f4d","Type":"ContainerDied","Data":"fe3011a003164d44d90d67ac1fc43170f0b6764388166665390af016877670cb"} Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.609351 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.936383 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"eab2a2c4-440f-4b67-9eea-5386108bb9a9","Type":"ContainerStarted","Data":"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a"} Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.936745 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.936480 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api-log" containerID="cri-o://6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0" gracePeriod=30 Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.936872 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api" containerID="cri-o://59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a" gracePeriod=30 Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.945579 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e8415d7b-df39-4568-a028-7556ef70b916","Type":"ContainerStarted","Data":"281bff89fbb992b4459660e77ec0b9e647e50fc0629c6e63a956f886c8cc14ee"} Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.969424 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" event={"ID":"e93e4d61-ad39-41c9-80ce-653f91213f4d","Type":"ContainerStarted","Data":"7960bfef76b5679b2bb0413726e42c0e5fa9c98347eb4c9ca9764d35d49186ee"} Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.970658 4684 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:17 crc kubenswrapper[4684]: I0123 10:12:17.984342 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.984323272 podStartE2EDuration="5.984323272s" podCreationTimestamp="2026-01-23 10:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:12:17.968955124 +0000 UTC m=+3910.592333665" watchObservedRunningTime="2026-01-23 10:12:17.984323272 +0000 UTC m=+3910.607701813" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.021862 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" podStartSLOduration=5.021836852 podStartE2EDuration="5.021836852s" podCreationTimestamp="2026-01-23 10:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:12:18.00107865 +0000 UTC m=+3910.624457191" watchObservedRunningTime="2026-01-23 10:12:18.021836852 +0000 UTC m=+3910.645215393" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.738199 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.879866 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.879938 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkxd6\" (UniqueName: \"kubernetes.io/projected/eab2a2c4-440f-4b67-9eea-5386108bb9a9-kube-api-access-hkxd6\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.879955 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data-custom\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.879982 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-combined-ca-bundle\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.880751 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eab2a2c4-440f-4b67-9eea-5386108bb9a9-logs\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.880783 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-scripts\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.880800 4684 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eab2a2c4-440f-4b67-9eea-5386108bb9a9-etc-machine-id\") pod \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\" (UID: \"eab2a2c4-440f-4b67-9eea-5386108bb9a9\") " Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.881312 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eab2a2c4-440f-4b67-9eea-5386108bb9a9-logs" (OuterVolumeSpecName: "logs") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.881328 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eab2a2c4-440f-4b67-9eea-5386108bb9a9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.888450 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-scripts" (OuterVolumeSpecName: "scripts") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.898885 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.903607 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab2a2c4-440f-4b67-9eea-5386108bb9a9-kube-api-access-hkxd6" (OuterVolumeSpecName: "kube-api-access-hkxd6") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "kube-api-access-hkxd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.947137 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.965867 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data" (OuterVolumeSpecName: "config-data") pod "eab2a2c4-440f-4b67-9eea-5386108bb9a9" (UID: "eab2a2c4-440f-4b67-9eea-5386108bb9a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983173 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983217 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkxd6\" (UniqueName: \"kubernetes.io/projected/eab2a2c4-440f-4b67-9eea-5386108bb9a9-kube-api-access-hkxd6\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983231 4684 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983244 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983257 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eab2a2c4-440f-4b67-9eea-5386108bb9a9-logs\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983268 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab2a2c4-440f-4b67-9eea-5386108bb9a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:18 crc kubenswrapper[4684]: I0123 10:12:18.983279 4684 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eab2a2c4-440f-4b67-9eea-5386108bb9a9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013020 4684 generic.go:334] "Generic (PLEG): container finished" podID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerID="59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a" exitCode=143 Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013055 4684 generic.go:334] "Generic (PLEG): container finished" podID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerID="6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0" exitCode=143 Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013144 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013353 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"eab2a2c4-440f-4b67-9eea-5386108bb9a9","Type":"ContainerDied","Data":"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a"} Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013463 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"eab2a2c4-440f-4b67-9eea-5386108bb9a9","Type":"ContainerDied","Data":"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0"} Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013523 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"eab2a2c4-440f-4b67-9eea-5386108bb9a9","Type":"ContainerDied","Data":"35747d7ffa2ccbad7ea06d6197d9b2492b915b95f9cfb5899f76aa3d4bc5d8e5"} Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.013604 4684 scope.go:117] "RemoveContainer" containerID="59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.028040 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e8415d7b-df39-4568-a028-7556ef70b916","Type":"ContainerStarted","Data":"e64e5bfd4e82d0e6d9b538b2255c90fbd279d725ecd4ea9fb3ddd5ab7f4195ed"} Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.063102 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=6.118276401 podStartE2EDuration="7.06307789s" podCreationTimestamp="2026-01-23 10:12:12 +0000 UTC" firstStartedPulling="2026-01-23 10:12:15.762985086 +0000 UTC m=+3908.386363627" lastFinishedPulling="2026-01-23 10:12:16.707786575 +0000 UTC m=+3909.331165116" observedRunningTime="2026-01-23 10:12:19.049449521 +0000 UTC m=+3911.672828092" watchObservedRunningTime="2026-01-23 10:12:19.06307789 +0000 UTC m=+3911.686456431" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.089994 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.093177 4684 scope.go:117] "RemoveContainer" containerID="6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.126510 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.128859 4684 scope.go:117] "RemoveContainer" containerID="59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a" Jan 23 10:12:19 crc kubenswrapper[4684]: E0123 10:12:19.129758 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a\": container with ID starting with 59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a not found: ID does not exist" containerID="59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.129870 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a"} err="failed to get container status \"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a\": rpc error: code = NotFound desc = could not find container 
\"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a\": container with ID starting with 59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a not found: ID does not exist" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.129947 4684 scope.go:117] "RemoveContainer" containerID="6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0" Jan 23 10:12:19 crc kubenswrapper[4684]: E0123 10:12:19.131515 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0\": container with ID starting with 6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0 not found: ID does not exist" containerID="6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.131675 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0"} err="failed to get container status \"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0\": rpc error: code = NotFound desc = could not find container \"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0\": container with ID starting with 6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0 not found: ID does not exist" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.131780 4684 scope.go:117] "RemoveContainer" containerID="59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.134021 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a"} err="failed to get container status \"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a\": rpc error: code = NotFound desc = could not find container \"59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a\": container with ID starting with 59c79f3c16aee0cee57f9ebc8c7e981eb53a9ae32f9559aa5acac4b09eab4a0a not found: ID does not exist" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.134060 4684 scope.go:117] "RemoveContainer" containerID="6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.135171 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0"} err="failed to get container status \"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0\": rpc error: code = NotFound desc = could not find container \"6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0\": container with ID starting with 6e838ff582d0128569c984b0293cd5d36eb8011941ba8652763553ef8decd9b0 not found: ID does not exist" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.143767 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:19 crc kubenswrapper[4684]: E0123 10:12:19.144344 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api-log" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.144406 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api-log" Jan 23 10:12:19 crc 
kubenswrapper[4684]: E0123 10:12:19.144478 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.144544 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.144803 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api-log" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.144890 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" containerName="manila-api" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.146032 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.149454 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.154430 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.154623 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.166044 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.290302 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-scripts\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.290593 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.290690 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-public-tls-certs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.290854 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.290987 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkdw\" (UniqueName: \"kubernetes.io/projected/f183a69c-226e-4737-81b8-01cae8e76539-kube-api-access-4pkdw\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.291117 4684 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-config-data\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.291225 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f183a69c-226e-4737-81b8-01cae8e76539-etc-machine-id\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.291345 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f183a69c-226e-4737-81b8-01cae8e76539-logs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.291463 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-config-data-custom\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.392951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393247 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-public-tls-certs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393288 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393308 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pkdw\" (UniqueName: \"kubernetes.io/projected/f183a69c-226e-4737-81b8-01cae8e76539-kube-api-access-4pkdw\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393348 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-config-data\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393368 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f183a69c-226e-4737-81b8-01cae8e76539-etc-machine-id\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " 
pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393396 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f183a69c-226e-4737-81b8-01cae8e76539-logs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393432 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-config-data-custom\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.393460 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-scripts\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.401549 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f183a69c-226e-4737-81b8-01cae8e76539-logs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.401630 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f183a69c-226e-4737-81b8-01cae8e76539-etc-machine-id\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.406620 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.406944 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-scripts\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.407304 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-config-data-custom\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.408339 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-public-tls-certs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.409078 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-config-data\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.427345 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f183a69c-226e-4737-81b8-01cae8e76539-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.439478 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pkdw\" (UniqueName: \"kubernetes.io/projected/f183a69c-226e-4737-81b8-01cae8e76539-kube-api-access-4pkdw\") pod \"manila-api-0\" (UID: \"f183a69c-226e-4737-81b8-01cae8e76539\") " pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.471080 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 23 10:12:19 crc kubenswrapper[4684]: I0123 10:12:19.636341 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab2a2c4-440f-4b67-9eea-5386108bb9a9" path="/var/lib/kubelet/pods/eab2a2c4-440f-4b67-9eea-5386108bb9a9/volumes" Jan 23 10:12:20 crc kubenswrapper[4684]: I0123 10:12:20.010958 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jsxqf" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" probeResult="failure" output=< Jan 23 10:12:20 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:12:20 crc kubenswrapper[4684]: > Jan 23 10:12:20 crc kubenswrapper[4684]: I0123 10:12:20.341630 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 23 10:12:20 crc kubenswrapper[4684]: W0123 10:12:20.476737 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf183a69c_226e_4737_81b8_01cae8e76539.slice/crio-c75d8ef6d6d1e57974d59b2682fc467c103c77434cc12371e4c8ccc2480e831f WatchSource:0}: Error finding container c75d8ef6d6d1e57974d59b2682fc467c103c77434cc12371e4c8ccc2480e831f: Status 404 returned error can't find the container with id c75d8ef6d6d1e57974d59b2682fc467c103c77434cc12371e4c8ccc2480e831f Jan 23 10:12:21 crc kubenswrapper[4684]: I0123 10:12:21.092297 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f183a69c-226e-4737-81b8-01cae8e76539","Type":"ContainerStarted","Data":"3330bf4eb04093de1bac1d76df991515825520f07855f259a318db79c2b5ed13"} Jan 23 10:12:21 crc kubenswrapper[4684]: I0123 10:12:21.093736 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f183a69c-226e-4737-81b8-01cae8e76539","Type":"ContainerStarted","Data":"c75d8ef6d6d1e57974d59b2682fc467c103c77434cc12371e4c8ccc2480e831f"} Jan 23 10:12:22 crc kubenswrapper[4684]: I0123 10:12:22.112668 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f183a69c-226e-4737-81b8-01cae8e76539","Type":"ContainerStarted","Data":"1d21370540817f1b39028b278e0e050a0e85acd66b829d79368e49895fe83512"} Jan 23 10:12:22 crc kubenswrapper[4684]: I0123 10:12:22.112936 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 23 10:12:22 crc kubenswrapper[4684]: I0123 10:12:22.141656 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.141631407 podStartE2EDuration="3.141631407s" podCreationTimestamp="2026-01-23 10:12:19 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:12:22.132164377 +0000 UTC m=+3914.755542918" watchObservedRunningTime="2026-01-23 10:12:22.141631407 +0000 UTC m=+3914.765009948" Jan 23 10:12:22 crc kubenswrapper[4684]: I0123 10:12:22.879615 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 23 10:12:23 crc kubenswrapper[4684]: I0123 10:12:23.496496 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:12:24 crc kubenswrapper[4684]: I0123 10:12:24.111929 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbdfc799f-zk2np" Jan 23 10:12:24 crc kubenswrapper[4684]: I0123 10:12:24.218607 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446b8749f-5zcjt"] Jan 23 10:12:24 crc kubenswrapper[4684]: I0123 10:12:24.218871 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="dnsmasq-dns" containerID="cri-o://0e441f99d8d7f62586c27a14f460269bb3ce4a9215d979566c6a8f24cf2f9242" gracePeriod=10 Jan 23 10:12:25 crc kubenswrapper[4684]: I0123 10:12:25.143591 4684 generic.go:334] "Generic (PLEG): container finished" podID="254228f8-63f6-4461-83cb-fac99d91726e" containerID="0e441f99d8d7f62586c27a14f460269bb3ce4a9215d979566c6a8f24cf2f9242" exitCode=0 Jan 23 10:12:25 crc kubenswrapper[4684]: I0123 10:12:25.143673 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" event={"ID":"254228f8-63f6-4461-83cb-fac99d91726e","Type":"ContainerDied","Data":"0e441f99d8d7f62586c27a14f460269bb3ce4a9215d979566c6a8f24cf2f9242"} Jan 23 10:12:26 crc kubenswrapper[4684]: I0123 10:12:26.571933 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.353179 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.353689 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-central-agent" containerID="cri-o://1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90" gracePeriod=30 Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.353834 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="proxy-httpd" containerID="cri-o://1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480" gracePeriod=30 Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.353870 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="sg-core" containerID="cri-o://f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3" gracePeriod=30 Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.353901 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-notification-agent" 
containerID="cri-o://595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c" gracePeriod=30 Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.582122 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:12:28 crc kubenswrapper[4684]: E0123 10:12:28.582732 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.892507 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:12:28 crc kubenswrapper[4684]: I0123 10:12:28.960902 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.181888 4684 generic.go:334] "Generic (PLEG): container finished" podID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerID="1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480" exitCode=0 Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.181915 4684 generic.go:334] "Generic (PLEG): container finished" podID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerID="f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3" exitCode=2 Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.181925 4684 generic.go:334] "Generic (PLEG): container finished" podID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerID="1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90" exitCode=0 Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.181961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerDied","Data":"1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480"} Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.182010 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerDied","Data":"f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3"} Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.182024 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerDied","Data":"1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90"} Jan 23 10:12:29 crc kubenswrapper[4684]: I0123 10:12:29.722201 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsxqf"] Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.045468 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.169516 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-openstack-edpm-ipam\") pod \"254228f8-63f6-4461-83cb-fac99d91726e\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.169584 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-sb\") pod \"254228f8-63f6-4461-83cb-fac99d91726e\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.169642 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-config\") pod \"254228f8-63f6-4461-83cb-fac99d91726e\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.169736 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prhz7\" (UniqueName: \"kubernetes.io/projected/254228f8-63f6-4461-83cb-fac99d91726e-kube-api-access-prhz7\") pod \"254228f8-63f6-4461-83cb-fac99d91726e\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.169815 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-nb\") pod \"254228f8-63f6-4461-83cb-fac99d91726e\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.169846 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-dns-svc\") pod \"254228f8-63f6-4461-83cb-fac99d91726e\" (UID: \"254228f8-63f6-4461-83cb-fac99d91726e\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.184928 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254228f8-63f6-4461-83cb-fac99d91726e-kube-api-access-prhz7" (OuterVolumeSpecName: "kube-api-access-prhz7") pod "254228f8-63f6-4461-83cb-fac99d91726e" (UID: "254228f8-63f6-4461-83cb-fac99d91726e"). InnerVolumeSpecName "kube-api-access-prhz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.219868 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jsxqf" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" containerID="cri-o://3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7" gracePeriod=2 Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.220019 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.220612 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" event={"ID":"254228f8-63f6-4461-83cb-fac99d91726e","Type":"ContainerDied","Data":"cb3f3f0382caa47b4265862e72e1c2e91f232ea1c3d6bfb0294c38bc9e519f2c"} Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.220678 4684 scope.go:117] "RemoveContainer" containerID="0e441f99d8d7f62586c27a14f460269bb3ce4a9215d979566c6a8f24cf2f9242" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.274625 4684 scope.go:117] "RemoveContainer" containerID="d3a8618764b797dba6477bb0a4c98c1197497fa2f09f3da440c9b1bec75e3909" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.276310 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prhz7\" (UniqueName: \"kubernetes.io/projected/254228f8-63f6-4461-83cb-fac99d91726e-kube-api-access-prhz7\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.289760 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "254228f8-63f6-4461-83cb-fac99d91726e" (UID: "254228f8-63f6-4461-83cb-fac99d91726e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.290583 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "254228f8-63f6-4461-83cb-fac99d91726e" (UID: "254228f8-63f6-4461-83cb-fac99d91726e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.295079 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-config" (OuterVolumeSpecName: "config") pod "254228f8-63f6-4461-83cb-fac99d91726e" (UID: "254228f8-63f6-4461-83cb-fac99d91726e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.296319 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "254228f8-63f6-4461-83cb-fac99d91726e" (UID: "254228f8-63f6-4461-83cb-fac99d91726e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.323905 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "254228f8-63f6-4461-83cb-fac99d91726e" (UID: "254228f8-63f6-4461-83cb-fac99d91726e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.378814 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.379060 4684 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.379072 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.379082 4684 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.379212 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254228f8-63f6-4461-83cb-fac99d91726e-config\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.574154 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446b8749f-5zcjt"] Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.594172 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8446b8749f-5zcjt"] Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.774566 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.893676 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c576z\" (UniqueName: \"kubernetes.io/projected/8cd40dbd-dde6-4dab-ad91-26b0c526d129-kube-api-access-c576z\") pod \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.894101 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-utilities\") pod \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.894166 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-catalog-content\") pod \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\" (UID: \"8cd40dbd-dde6-4dab-ad91-26b0c526d129\") " Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.894890 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-utilities" (OuterVolumeSpecName: "utilities") pod "8cd40dbd-dde6-4dab-ad91-26b0c526d129" (UID: "8cd40dbd-dde6-4dab-ad91-26b0c526d129"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.918792 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd40dbd-dde6-4dab-ad91-26b0c526d129-kube-api-access-c576z" (OuterVolumeSpecName: "kube-api-access-c576z") pod "8cd40dbd-dde6-4dab-ad91-26b0c526d129" (UID: "8cd40dbd-dde6-4dab-ad91-26b0c526d129"). InnerVolumeSpecName "kube-api-access-c576z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.997036 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:30 crc kubenswrapper[4684]: I0123 10:12:30.997087 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c576z\" (UniqueName: \"kubernetes.io/projected/8cd40dbd-dde6-4dab-ad91-26b0c526d129-kube-api-access-c576z\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.094484 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cd40dbd-dde6-4dab-ad91-26b0c526d129" (UID: "8cd40dbd-dde6-4dab-ad91-26b0c526d129"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.102556 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd40dbd-dde6-4dab-ad91-26b0c526d129-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.230188 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"601d1dbf-3e41-4f48-86a5-2038be6a33b3","Type":"ContainerStarted","Data":"f1db028471b3c459d983154fabc17d46d180ffbbd77b0ef618c2c24590d6b887"} Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.234437 4684 generic.go:334] "Generic (PLEG): container finished" podID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerID="3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7" exitCode=0 Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.234485 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerDied","Data":"3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7"} Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.234518 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsxqf" event={"ID":"8cd40dbd-dde6-4dab-ad91-26b0c526d129","Type":"ContainerDied","Data":"15c5cf8cea00ecdc274ee2762b631c644dda63c03d1121a94a43a05a99a93c76"} Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.234542 4684 scope.go:117] "RemoveContainer" containerID="3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.234683 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsxqf" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.310401 4684 scope.go:117] "RemoveContainer" containerID="cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.356772 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsxqf"] Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.399804 4684 scope.go:117] "RemoveContainer" containerID="80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.431766 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jsxqf"] Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.438155 4684 scope.go:117] "RemoveContainer" containerID="3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7" Jan 23 10:12:31 crc kubenswrapper[4684]: E0123 10:12:31.441046 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7\": container with ID starting with 3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7 not found: ID does not exist" containerID="3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.441083 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7"} err="failed to get container status \"3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7\": rpc error: code = NotFound desc = could not find container \"3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7\": container with ID starting with 3607c7a9253c4bc04acef15b6aeba069481bbb3858ffc03ac4e50f073d4948b7 not found: ID does not exist" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.441106 4684 scope.go:117] "RemoveContainer" containerID="cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62" Jan 23 10:12:31 crc kubenswrapper[4684]: E0123 10:12:31.445036 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62\": container with ID starting with cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62 not found: ID does not exist" containerID="cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.445075 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62"} err="failed to get container status \"cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62\": rpc error: code = NotFound desc = could not find container \"cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62\": container with ID starting with cd0481a3b65a2a4239ab7422e8b73fa87eb3657e2f39eefa07972addac5e7f62 not found: ID does not exist" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.445095 4684 scope.go:117] "RemoveContainer" containerID="80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6" Jan 23 10:12:31 crc kubenswrapper[4684]: E0123 10:12:31.449007 4684 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6\": container with ID starting with 80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6 not found: ID does not exist" containerID="80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.449036 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6"} err="failed to get container status \"80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6\": rpc error: code = NotFound desc = could not find container \"80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6\": container with ID starting with 80ed6c263c0dd7ed1ef17db4adc3e956126bee1cb3159c0e004afb27ab3e94d6 not found: ID does not exist" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.557621 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8446b8749f-5zcjt" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: i/o timeout" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.598953 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254228f8-63f6-4461-83cb-fac99d91726e" path="/var/lib/kubelet/pods/254228f8-63f6-4461-83cb-fac99d91726e/volumes" Jan 23 10:12:31 crc kubenswrapper[4684]: I0123 10:12:31.599755 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" path="/var/lib/kubelet/pods/8cd40dbd-dde6-4dab-ad91-26b0c526d129/volumes" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.255202 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"601d1dbf-3e41-4f48-86a5-2038be6a33b3","Type":"ContainerStarted","Data":"f6aade6c2018686a84a62d57e2cf71f7405fbe2443bad447edfaedeecaaf0268"} Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.288553 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=6.159882028 podStartE2EDuration="20.288527441s" podCreationTimestamp="2026-01-23 10:12:12 +0000 UTC" firstStartedPulling="2026-01-23 10:12:15.757193131 +0000 UTC m=+3908.380571662" lastFinishedPulling="2026-01-23 10:12:29.885838534 +0000 UTC m=+3922.509217075" observedRunningTime="2026-01-23 10:12:32.277685442 +0000 UTC m=+3924.901064003" watchObservedRunningTime="2026-01-23 10:12:32.288527441 +0000 UTC m=+3924.911905972" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.704480 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.825860 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.845692 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkw7n\" (UniqueName: \"kubernetes.io/projected/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-kube-api-access-nkw7n\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.845800 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-run-httpd\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.845860 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-ceilometer-tls-certs\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.845906 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-combined-ca-bundle\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.846034 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-log-httpd\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.846099 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-config-data\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.846118 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-sg-core-conf-yaml\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.846165 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-scripts\") pod \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\" (UID: \"69edf57e-bfdb-4e05-b61a-5b42dad87ff8\") " Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.846489 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.846793 4684 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.847161 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.905224 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-kube-api-access-nkw7n" (OuterVolumeSpecName: "kube-api-access-nkw7n") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "kube-api-access-nkw7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.913500 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.915589 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-scripts" (OuterVolumeSpecName: "scripts") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.948773 4684 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.948813 4684 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.948826 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:32 crc kubenswrapper[4684]: I0123 10:12:32.948837 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkw7n\" (UniqueName: \"kubernetes.io/projected/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-kube-api-access-nkw7n\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.035454 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.050384 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.050918 4684 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.050958 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.109939 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-config-data" (OuterVolumeSpecName: "config-data") pod "69edf57e-bfdb-4e05-b61a-5b42dad87ff8" (UID: "69edf57e-bfdb-4e05-b61a-5b42dad87ff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.153153 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69edf57e-bfdb-4e05-b61a-5b42dad87ff8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.264634 4684 generic.go:334] "Generic (PLEG): container finished" podID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerID="595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c" exitCode=0 Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.265646 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.265846 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerDied","Data":"595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c"} Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.265901 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69edf57e-bfdb-4e05-b61a-5b42dad87ff8","Type":"ContainerDied","Data":"640bacf31ce185c99a4fe24d716f1a2a3dfdcd62b005bff24672b8c34f70d5ac"} Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.265941 4684 scope.go:117] "RemoveContainer" containerID="1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.294266 4684 scope.go:117] "RemoveContainer" containerID="f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.301814 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.331926 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.348651 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349149 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="extract-content" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349172 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="extract-content" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349191 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="init" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349199 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="init" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349217 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="extract-utilities" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349225 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="extract-utilities" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349238 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="proxy-httpd" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349245 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="proxy-httpd" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349272 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-central-agent" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349280 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-central-agent" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349290 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349297 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349310 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="dnsmasq-dns" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349317 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="dnsmasq-dns" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349334 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="sg-core" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349341 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="sg-core" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.349363 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-notification-agent" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349370 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-notification-agent" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349581 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="254228f8-63f6-4461-83cb-fac99d91726e" containerName="dnsmasq-dns" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349609 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="sg-core" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349620 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd40dbd-dde6-4dab-ad91-26b0c526d129" containerName="registry-server" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349640 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-central-agent" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349651 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="proxy-httpd" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.349665 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" containerName="ceilometer-notification-agent" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.351641 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.353308 4684 scope.go:117] "RemoveContainer" containerID="595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.354778 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.358657 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.360620 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.370522 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.405575 4684 scope.go:117] "RemoveContainer" containerID="1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.429972 4684 scope.go:117] "RemoveContainer" containerID="1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.430627 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480\": container with ID starting with 1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480 not found: ID does not exist" containerID="1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.430656 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480"} err="failed to get container status \"1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480\": rpc error: code = NotFound desc = could not find container \"1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480\": container with ID starting with 1e8d28be59fb08176414bb619422b76575709f8a48348b61688048b491a72480 not found: ID does not exist" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.430677 4684 scope.go:117] "RemoveContainer" containerID="f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.431003 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3\": container with ID starting with f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3 not found: ID does not exist" containerID="f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.431033 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3"} err="failed to get container status \"f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3\": rpc error: code = NotFound desc = could not find container \"f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3\": container with ID starting with f0d9d2bf5ab9e06f96ce10efcf51656474f6dfab8142907591690e7e1e89aeb3 not found: ID does not exist" Jan 23 10:12:33 
crc kubenswrapper[4684]: I0123 10:12:33.431047 4684 scope.go:117] "RemoveContainer" containerID="595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.431406 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c\": container with ID starting with 595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c not found: ID does not exist" containerID="595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.431447 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c"} err="failed to get container status \"595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c\": rpc error: code = NotFound desc = could not find container \"595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c\": container with ID starting with 595a24405a012c2ad79c420e47eb463781ed4e39fc2d86db1ebc361d3ec7e85c not found: ID does not exist" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.431475 4684 scope.go:117] "RemoveContainer" containerID="1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90" Jan 23 10:12:33 crc kubenswrapper[4684]: E0123 10:12:33.431758 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90\": container with ID starting with 1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90 not found: ID does not exist" containerID="1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.431783 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90"} err="failed to get container status \"1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90\": rpc error: code = NotFound desc = could not find container \"1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90\": container with ID starting with 1a1ea1d2af0d9bf2659965e5271c543d9ac302f7ff7b16d6bba5b8633363da90 not found: ID does not exist" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460415 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460482 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-scripts\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460646 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-config-data\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " 
pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460806 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19914f8a-2409-41e0-accb-221ccdb4428f-log-httpd\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460853 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19914f8a-2409-41e0-accb-221ccdb4428f-run-httpd\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460902 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.460942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.461085 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mwc\" (UniqueName: \"kubernetes.io/projected/19914f8a-2409-41e0-accb-221ccdb4428f-kube-api-access-98mwc\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563217 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-scripts\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563292 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-config-data\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563359 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19914f8a-2409-41e0-accb-221ccdb4428f-log-httpd\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563389 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19914f8a-2409-41e0-accb-221ccdb4428f-run-httpd\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563420 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563447 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563517 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mwc\" (UniqueName: \"kubernetes.io/projected/19914f8a-2409-41e0-accb-221ccdb4428f-kube-api-access-98mwc\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.563585 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.564525 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19914f8a-2409-41e0-accb-221ccdb4428f-run-httpd\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.564847 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19914f8a-2409-41e0-accb-221ccdb4428f-log-httpd\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.569558 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.569653 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-scripts\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.570557 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-config-data\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.572859 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.581593 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19914f8a-2409-41e0-accb-221ccdb4428f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.592430 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mwc\" (UniqueName: \"kubernetes.io/projected/19914f8a-2409-41e0-accb-221ccdb4428f-kube-api-access-98mwc\") pod \"ceilometer-0\" (UID: \"19914f8a-2409-41e0-accb-221ccdb4428f\") " pod="openstack/ceilometer-0" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.599089 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69edf57e-bfdb-4e05-b61a-5b42dad87ff8" path="/var/lib/kubelet/pods/69edf57e-bfdb-4e05-b61a-5b42dad87ff8/volumes" Jan 23 10:12:33 crc kubenswrapper[4684]: I0123 10:12:33.677278 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 10:12:34 crc kubenswrapper[4684]: I0123 10:12:34.200104 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 10:12:34 crc kubenswrapper[4684]: I0123 10:12:34.275884 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19914f8a-2409-41e0-accb-221ccdb4428f","Type":"ContainerStarted","Data":"a06b24958f5a2078f459bc59570d8a03a3dc71697472df678d3d50551b137756"} Jan 23 10:12:35 crc kubenswrapper[4684]: I0123 10:12:35.207311 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 23 10:12:35 crc kubenswrapper[4684]: I0123 10:12:35.257498 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:35 crc kubenswrapper[4684]: I0123 10:12:35.288676 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="manila-scheduler" containerID="cri-o://281bff89fbb992b4459660e77ec0b9e647e50fc0629c6e63a956f886c8cc14ee" gracePeriod=30 Jan 23 10:12:35 crc kubenswrapper[4684]: I0123 10:12:35.288951 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19914f8a-2409-41e0-accb-221ccdb4428f","Type":"ContainerStarted","Data":"2f23874283b23f2837fa74f167f174a8b134e350805721a69b43ccd7252ff66d"} Jan 23 10:12:35 crc kubenswrapper[4684]: I0123 10:12:35.289172 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="probe" containerID="cri-o://e64e5bfd4e82d0e6d9b538b2255c90fbd279d725ecd4ea9fb3ddd5ab7f4195ed" gracePeriod=30 Jan 23 10:12:36 crc kubenswrapper[4684]: I0123 10:12:36.298972 4684 generic.go:334] "Generic (PLEG): container finished" podID="e8415d7b-df39-4568-a028-7556ef70b916" containerID="e64e5bfd4e82d0e6d9b538b2255c90fbd279d725ecd4ea9fb3ddd5ab7f4195ed" exitCode=0 Jan 23 10:12:36 crc kubenswrapper[4684]: I0123 10:12:36.299280 4684 generic.go:334] "Generic (PLEG): container finished" podID="e8415d7b-df39-4568-a028-7556ef70b916" containerID="281bff89fbb992b4459660e77ec0b9e647e50fc0629c6e63a956f886c8cc14ee" exitCode=0 Jan 23 10:12:36 crc kubenswrapper[4684]: I0123 10:12:36.299051 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e8415d7b-df39-4568-a028-7556ef70b916","Type":"ContainerDied","Data":"e64e5bfd4e82d0e6d9b538b2255c90fbd279d725ecd4ea9fb3ddd5ab7f4195ed"} Jan 23 
10:12:36 crc kubenswrapper[4684]: I0123 10:12:36.299323 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e8415d7b-df39-4568-a028-7556ef70b916","Type":"ContainerDied","Data":"281bff89fbb992b4459660e77ec0b9e647e50fc0629c6e63a956f886c8cc14ee"} Jan 23 10:12:36 crc kubenswrapper[4684]: I0123 10:12:36.303110 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19914f8a-2409-41e0-accb-221ccdb4428f","Type":"ContainerStarted","Data":"40cec683e2a0c3ef9a704463d50fafb1af09c0deb87236ebe34d810e8bd6b721"} Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.007600 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049386 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8415d7b-df39-4568-a028-7556ef70b916-etc-machine-id\") pod \"e8415d7b-df39-4568-a028-7556ef70b916\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049517 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8415d7b-df39-4568-a028-7556ef70b916-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e8415d7b-df39-4568-a028-7556ef70b916" (UID: "e8415d7b-df39-4568-a028-7556ef70b916"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049540 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjlp4\" (UniqueName: \"kubernetes.io/projected/e8415d7b-df39-4568-a028-7556ef70b916-kube-api-access-rjlp4\") pod \"e8415d7b-df39-4568-a028-7556ef70b916\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049579 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data-custom\") pod \"e8415d7b-df39-4568-a028-7556ef70b916\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049616 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-scripts\") pod \"e8415d7b-df39-4568-a028-7556ef70b916\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049691 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-combined-ca-bundle\") pod \"e8415d7b-df39-4568-a028-7556ef70b916\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.049778 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data\") pod \"e8415d7b-df39-4568-a028-7556ef70b916\" (UID: \"e8415d7b-df39-4568-a028-7556ef70b916\") " Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.050437 4684 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e8415d7b-df39-4568-a028-7556ef70b916-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.061883 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-scripts" (OuterVolumeSpecName: "scripts") pod "e8415d7b-df39-4568-a028-7556ef70b916" (UID: "e8415d7b-df39-4568-a028-7556ef70b916"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.074873 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8415d7b-df39-4568-a028-7556ef70b916-kube-api-access-rjlp4" (OuterVolumeSpecName: "kube-api-access-rjlp4") pod "e8415d7b-df39-4568-a028-7556ef70b916" (UID: "e8415d7b-df39-4568-a028-7556ef70b916"). InnerVolumeSpecName "kube-api-access-rjlp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.078018 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e8415d7b-df39-4568-a028-7556ef70b916" (UID: "e8415d7b-df39-4568-a028-7556ef70b916"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.152558 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjlp4\" (UniqueName: \"kubernetes.io/projected/e8415d7b-df39-4568-a028-7556ef70b916-kube-api-access-rjlp4\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.152852 4684 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.152867 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.159318 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8415d7b-df39-4568-a028-7556ef70b916" (UID: "e8415d7b-df39-4568-a028-7556ef70b916"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.233882 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data" (OuterVolumeSpecName: "config-data") pod "e8415d7b-df39-4568-a028-7556ef70b916" (UID: "e8415d7b-df39-4568-a028-7556ef70b916"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.255081 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.255117 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8415d7b-df39-4568-a028-7556ef70b916-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.313784 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e8415d7b-df39-4568-a028-7556ef70b916","Type":"ContainerDied","Data":"e1ca9bdcaba619d25675d3f624bd51b6620b8cd02f61cb82eadc885600bb46c9"} Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.314643 4684 scope.go:117] "RemoveContainer" containerID="e64e5bfd4e82d0e6d9b538b2255c90fbd279d725ecd4ea9fb3ddd5ab7f4195ed" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.314212 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.343160 4684 scope.go:117] "RemoveContainer" containerID="281bff89fbb992b4459660e77ec0b9e647e50fc0629c6e63a956f886c8cc14ee" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.374096 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.390796 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.421836 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:37 crc kubenswrapper[4684]: E0123 10:12:37.422367 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="probe" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.422392 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="probe" Jan 23 10:12:37 crc kubenswrapper[4684]: E0123 10:12:37.422406 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="manila-scheduler" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.422415 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="manila-scheduler" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.422802 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="probe" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.422829 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8415d7b-df39-4568-a028-7556ef70b916" containerName="manila-scheduler" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.424148 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.431827 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.434983 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.459985 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.460087 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-config-data\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.460128 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1be4f920-aa7e-412c-8241-a795a65be1bb-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.460159 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-scripts\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.460266 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.460345 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxrn\" (UniqueName: \"kubernetes.io/projected/1be4f920-aa7e-412c-8241-a795a65be1bb-kube-api-access-jjxrn\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.561959 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-config-data\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.562026 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1be4f920-aa7e-412c-8241-a795a65be1bb-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.562050 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-scripts\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.562114 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.562149 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxrn\" (UniqueName: \"kubernetes.io/projected/1be4f920-aa7e-412c-8241-a795a65be1bb-kube-api-access-jjxrn\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.562151 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1be4f920-aa7e-412c-8241-a795a65be1bb-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.562223 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.567293 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.567629 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-config-data\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.568444 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.571097 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be4f920-aa7e-412c-8241-a795a65be1bb-scripts\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.577808 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxrn\" (UniqueName: \"kubernetes.io/projected/1be4f920-aa7e-412c-8241-a795a65be1bb-kube-api-access-jjxrn\") pod \"manila-scheduler-0\" (UID: \"1be4f920-aa7e-412c-8241-a795a65be1bb\") " pod="openstack/manila-scheduler-0" 
Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.596496 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8415d7b-df39-4568-a028-7556ef70b916" path="/var/lib/kubelet/pods/e8415d7b-df39-4568-a028-7556ef70b916/volumes" Jan 23 10:12:37 crc kubenswrapper[4684]: I0123 10:12:37.747794 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 23 10:12:38 crc kubenswrapper[4684]: I0123 10:12:38.281556 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 23 10:12:38 crc kubenswrapper[4684]: W0123 10:12:38.295212 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be4f920_aa7e_412c_8241_a795a65be1bb.slice/crio-e8087268f776b5db0c2204ee787c807cc8cd2db344c547fb2c398e6078a918fd WatchSource:0}: Error finding container e8087268f776b5db0c2204ee787c807cc8cd2db344c547fb2c398e6078a918fd: Status 404 returned error can't find the container with id e8087268f776b5db0c2204ee787c807cc8cd2db344c547fb2c398e6078a918fd Jan 23 10:12:38 crc kubenswrapper[4684]: I0123 10:12:38.338339 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1be4f920-aa7e-412c-8241-a795a65be1bb","Type":"ContainerStarted","Data":"e8087268f776b5db0c2204ee787c807cc8cd2db344c547fb2c398e6078a918fd"} Jan 23 10:12:38 crc kubenswrapper[4684]: I0123 10:12:38.344275 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19914f8a-2409-41e0-accb-221ccdb4428f","Type":"ContainerStarted","Data":"e1d02b273f7bd43c0d86ea8776251f81d50fd470c28e404e5185d17500a2dfe6"} Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.362034 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19914f8a-2409-41e0-accb-221ccdb4428f","Type":"ContainerStarted","Data":"d2a6d4392b30199544c6ec09d827fad0fa8c36d7790b7f2226def2288718c098"} Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.362669 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.366829 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1be4f920-aa7e-412c-8241-a795a65be1bb","Type":"ContainerStarted","Data":"d0e955b2f2ce2289dce7457e708d136cbcc7186654a59185c0081501e17e0a49"} Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.366873 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"1be4f920-aa7e-412c-8241-a795a65be1bb","Type":"ContainerStarted","Data":"f46103e8719b2df661473564a228b48e2b2c01708723995580aed235238105e6"} Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.379190 4684 generic.go:334] "Generic (PLEG): container finished" podID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerID="dc8c5b0795461756572228e25c06926d3e363425ec9a0870d9103ee9701634b3" exitCode=137 Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.379237 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df5b758fb-8sfdj" event={"ID":"78d43a15-1645-42a6-a25b-a6c4d7a244c4","Type":"ContainerDied","Data":"dc8c5b0795461756572228e25c06926d3e363425ec9a0870d9103ee9701634b3"} Jan 23 10:12:39 crc kubenswrapper[4684]: I0123 10:12:39.402878 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.677228478 podStartE2EDuration="6.402853988s" podCreationTimestamp="2026-01-23 10:12:33 +0000 UTC" firstStartedPulling="2026-01-23 10:12:34.224223972 +0000 UTC m=+3926.847602513" lastFinishedPulling="2026-01-23 10:12:38.949849482 +0000 UTC m=+3931.573228023" observedRunningTime="2026-01-23 10:12:39.388146099 +0000 UTC m=+3932.011524640" watchObservedRunningTime="2026-01-23 10:12:39.402853988 +0000 UTC m=+3932.026232529" Jan 23 10:12:40 crc kubenswrapper[4684]: I0123 10:12:40.398487 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df5b758fb-8sfdj" event={"ID":"78d43a15-1645-42a6-a25b-a6c4d7a244c4","Type":"ContainerStarted","Data":"b0991a0638f667e8d851b08b5251787659e6fa644eb1ac91180146ac6ae2939c"} Jan 23 10:12:40 crc kubenswrapper[4684]: I0123 10:12:40.439934 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.439909286 podStartE2EDuration="3.439909286s" podCreationTimestamp="2026-01-23 10:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:12:39.415026025 +0000 UTC m=+3932.038404576" watchObservedRunningTime="2026-01-23 10:12:40.439909286 +0000 UTC m=+3933.063287827" Jan 23 10:12:41 crc kubenswrapper[4684]: I0123 10:12:41.189519 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 23 10:12:42 crc kubenswrapper[4684]: I0123 10:12:42.582158 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:12:42 crc kubenswrapper[4684]: E0123 10:12:42.582654 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:12:44 crc kubenswrapper[4684]: I0123 10:12:44.506042 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 23 10:12:44 crc kubenswrapper[4684]: I0123 10:12:44.567461 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:45 crc kubenswrapper[4684]: I0123 10:12:45.442277 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="manila-share" containerID="cri-o://f1db028471b3c459d983154fabc17d46d180ffbbd77b0ef618c2c24590d6b887" gracePeriod=30 Jan 23 10:12:45 crc kubenswrapper[4684]: I0123 10:12:45.442371 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="probe" containerID="cri-o://f6aade6c2018686a84a62d57e2cf71f7405fbe2443bad447edfaedeecaaf0268" gracePeriod=30 Jan 23 10:12:46 crc kubenswrapper[4684]: I0123 10:12:46.453459 4684 generic.go:334] "Generic (PLEG): container finished" podID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerID="f6aade6c2018686a84a62d57e2cf71f7405fbe2443bad447edfaedeecaaf0268" exitCode=0 Jan 23 10:12:46 crc kubenswrapper[4684]: I0123 10:12:46.453806 4684 generic.go:334] "Generic (PLEG): container 
finished" podID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerID="f1db028471b3c459d983154fabc17d46d180ffbbd77b0ef618c2c24590d6b887" exitCode=1 Jan 23 10:12:46 crc kubenswrapper[4684]: I0123 10:12:46.453831 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"601d1dbf-3e41-4f48-86a5-2038be6a33b3","Type":"ContainerDied","Data":"f6aade6c2018686a84a62d57e2cf71f7405fbe2443bad447edfaedeecaaf0268"} Jan 23 10:12:46 crc kubenswrapper[4684]: I0123 10:12:46.453861 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"601d1dbf-3e41-4f48-86a5-2038be6a33b3","Type":"ContainerDied","Data":"f1db028471b3c459d983154fabc17d46d180ffbbd77b0ef618c2c24590d6b887"} Jan 23 10:12:46 crc kubenswrapper[4684]: I0123 10:12:46.934996 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.063750 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-etc-machine-id\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.063850 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.063872 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-var-lib-manila\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.063916 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-combined-ca-bundle\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.063992 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-scripts\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064025 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data-custom\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064048 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-ceph\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064255 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r86q\" 
(UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-kube-api-access-9r86q\") pod \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\" (UID: \"601d1dbf-3e41-4f48-86a5-2038be6a33b3\") " Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064305 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064369 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064889 4684 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.064914 4684 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/601d1dbf-3e41-4f48-86a5-2038be6a33b3-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.070619 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.070631 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-scripts" (OuterVolumeSpecName: "scripts") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.072524 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-ceph" (OuterVolumeSpecName: "ceph") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.072927 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-kube-api-access-9r86q" (OuterVolumeSpecName: "kube-api-access-9r86q") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "kube-api-access-9r86q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.139681 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.165706 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r86q\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-kube-api-access-9r86q\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.165733 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.165743 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.165759 4684 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.165769 4684 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/601d1dbf-3e41-4f48-86a5-2038be6a33b3-ceph\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.187246 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data" (OuterVolumeSpecName: "config-data") pod "601d1dbf-3e41-4f48-86a5-2038be6a33b3" (UID: "601d1dbf-3e41-4f48-86a5-2038be6a33b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.268069 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601d1dbf-3e41-4f48-86a5-2038be6a33b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.466157 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"601d1dbf-3e41-4f48-86a5-2038be6a33b3","Type":"ContainerDied","Data":"3664ce02766766b32242f2123f0346dbaf5448b19b69510784215d1939520cd3"} Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.466262 4684 scope.go:117] "RemoveContainer" containerID="f6aade6c2018686a84a62d57e2cf71f7405fbe2443bad447edfaedeecaaf0268" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.467314 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.500111 4684 scope.go:117] "RemoveContainer" containerID="f1db028471b3c459d983154fabc17d46d180ffbbd77b0ef618c2c24590d6b887" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.538916 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.570546 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.584857 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:47 crc kubenswrapper[4684]: E0123 10:12:47.585382 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="probe" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.585419 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="probe" Jan 23 10:12:47 crc kubenswrapper[4684]: E0123 10:12:47.585432 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="manila-share" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.585444 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="manila-share" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.585664 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="probe" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.585685 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" containerName="manila-share" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.592314 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.606053 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601d1dbf-3e41-4f48-86a5-2038be6a33b3" path="/var/lib/kubelet/pods/601d1dbf-3e41-4f48-86a5-2038be6a33b3/volumes" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.609133 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.610299 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679226 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679332 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679400 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-config-data\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679655 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679691 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-scripts\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679737 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-ceph\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.679767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.680069 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mq6dg\" (UniqueName: \"kubernetes.io/projected/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-kube-api-access-mq6dg\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.748955 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.781939 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.781993 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-scripts\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.782024 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-ceph\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.782047 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.782186 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6dg\" (UniqueName: \"kubernetes.io/projected/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-kube-api-access-mq6dg\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.782230 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.782272 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.782326 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-config-data\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.783486 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" 
(UniqueName: \"kubernetes.io/host-path/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.784034 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.787566 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-scripts\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.787676 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-ceph\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.788012 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-config-data\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.788635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.788635 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.801681 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6dg\" (UniqueName: \"kubernetes.io/projected/f7b4b82a-f432-48b9-ae9c-2d23a78aec42-kube-api-access-mq6dg\") pod \"manila-share-share1-0\" (UID: \"f7b4b82a-f432-48b9-ae9c-2d23a78aec42\") " pod="openstack/manila-share-share1-0" Jan 23 10:12:47 crc kubenswrapper[4684]: I0123 10:12:47.923561 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 23 10:12:48 crc kubenswrapper[4684]: I0123 10:12:48.454943 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 23 10:12:49 crc kubenswrapper[4684]: I0123 10:12:49.052788 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:12:49 crc kubenswrapper[4684]: I0123 10:12:49.053297 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:12:49 crc kubenswrapper[4684]: I0123 10:12:49.054601 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df5b758fb-8sfdj" podUID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 23 10:12:49 crc kubenswrapper[4684]: I0123 10:12:49.503641 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f7b4b82a-f432-48b9-ae9c-2d23a78aec42","Type":"ContainerStarted","Data":"09f28e6d926f3bbc2e2f5bdedbef90c74026c6de7facd6001905ffffa5055c31"} Jan 23 10:12:49 crc kubenswrapper[4684]: I0123 10:12:49.504021 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f7b4b82a-f432-48b9-ae9c-2d23a78aec42","Type":"ContainerStarted","Data":"cada33658c8d54feaa1c14281dfb7faa52cd5772ddd5c7deca9be8428ab7b004"} Jan 23 10:12:50 crc kubenswrapper[4684]: I0123 10:12:50.514474 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f7b4b82a-f432-48b9-ae9c-2d23a78aec42","Type":"ContainerStarted","Data":"648c09459b4532239ea0e43c57d44c991d7c3e17023a8b1af94b2117c495050f"} Jan 23 10:12:50 crc kubenswrapper[4684]: I0123 10:12:50.603979 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.6039575 podStartE2EDuration="3.6039575s" podCreationTimestamp="2026-01-23 10:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:12:50.597562697 +0000 UTC m=+3943.220941258" watchObservedRunningTime="2026-01-23 10:12:50.6039575 +0000 UTC m=+3943.227336061" Jan 23 10:12:54 crc kubenswrapper[4684]: I0123 10:12:54.582558 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:12:54 crc kubenswrapper[4684]: E0123 10:12:54.583388 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:12:57 crc kubenswrapper[4684]: I0123 10:12:57.924645 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 23 10:12:59 crc kubenswrapper[4684]: I0123 10:12:59.054181 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df5b758fb-8sfdj" podUID="78d43a15-1645-42a6-a25b-a6c4d7a244c4" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 23 10:12:59 crc kubenswrapper[4684]: I0123 10:12:59.469025 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 23 10:13:03 crc kubenswrapper[4684]: I0123 10:13:03.706851 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 10:13:06 crc kubenswrapper[4684]: I0123 10:13:06.582467 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:13:06 crc kubenswrapper[4684]: E0123 10:13:06.583342 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:13:09 crc kubenswrapper[4684]: I0123 10:13:09.721192 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 23 10:13:11 crc kubenswrapper[4684]: I0123 10:13:11.129639 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:13:13 crc kubenswrapper[4684]: I0123 10:13:13.152971 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7df5b758fb-8sfdj" Jan 23 10:13:13 crc kubenswrapper[4684]: I0123 10:13:13.226227 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dc7f74bf4-rpjsz"] Jan 23 10:13:13 crc kubenswrapper[4684]: I0123 10:13:13.226479 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon-log" containerID="cri-o://500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e" gracePeriod=30 Jan 23 10:13:13 crc kubenswrapper[4684]: I0123 10:13:13.226573 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" containerID="cri-o://110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30" gracePeriod=30 Jan 23 10:13:17 crc kubenswrapper[4684]: I0123 10:13:17.072318 4684 generic.go:334] "Generic (PLEG): container finished" podID="d510be09-5472-4350-8930-0cda7b4b9c84" containerID="110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30" exitCode=0 Jan 23 10:13:17 crc kubenswrapper[4684]: I0123 10:13:17.072373 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc7f74bf4-rpjsz" event={"ID":"d510be09-5472-4350-8930-0cda7b4b9c84","Type":"ContainerDied","Data":"110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30"} Jan 23 10:13:18 crc kubenswrapper[4684]: I0123 10:13:18.752522 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 23 10:13:19 crc kubenswrapper[4684]: I0123 
10:13:19.581790 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:13:19 crc kubenswrapper[4684]: E0123 10:13:19.582287 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:13:28 crc kubenswrapper[4684]: I0123 10:13:28.753002 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 23 10:13:30 crc kubenswrapper[4684]: I0123 10:13:30.583275 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:13:30 crc kubenswrapper[4684]: E0123 10:13:30.584046 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:13:38 crc kubenswrapper[4684]: I0123 10:13:38.752629 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dc7f74bf4-rpjsz" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 23 10:13:38 crc kubenswrapper[4684]: I0123 10:13:38.753317 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.143123 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240445 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-scripts\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240564 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-config-data\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240619 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm9v6\" (UniqueName: \"kubernetes.io/projected/d510be09-5472-4350-8930-0cda7b4b9c84-kube-api-access-xm9v6\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240715 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d510be09-5472-4350-8930-0cda7b4b9c84-logs\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240757 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-tls-certs\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240837 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-secret-key\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.240884 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-combined-ca-bundle\") pod \"d510be09-5472-4350-8930-0cda7b4b9c84\" (UID: \"d510be09-5472-4350-8930-0cda7b4b9c84\") " Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.242010 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d510be09-5472-4350-8930-0cda7b4b9c84-logs" (OuterVolumeSpecName: "logs") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.245672 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.263626 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d510be09-5472-4350-8930-0cda7b4b9c84-kube-api-access-xm9v6" (OuterVolumeSpecName: "kube-api-access-xm9v6") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "kube-api-access-xm9v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.271831 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.272071 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-config-data" (OuterVolumeSpecName: "config-data") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.284749 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-scripts" (OuterVolumeSpecName: "scripts") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.300778 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d510be09-5472-4350-8930-0cda7b4b9c84" (UID: "d510be09-5472-4350-8930-0cda7b4b9c84"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.321794 4684 generic.go:334] "Generic (PLEG): container finished" podID="d510be09-5472-4350-8930-0cda7b4b9c84" containerID="500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e" exitCode=137 Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.321852 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc7f74bf4-rpjsz" event={"ID":"d510be09-5472-4350-8930-0cda7b4b9c84","Type":"ContainerDied","Data":"500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e"} Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.321880 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc7f74bf4-rpjsz" event={"ID":"d510be09-5472-4350-8930-0cda7b4b9c84","Type":"ContainerDied","Data":"24cfc0ebfa7a7e7c712273b2a7b0d41a3931d6744783f3ca890c777f5bd9f44d"} Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.321900 4684 scope.go:117] "RemoveContainer" containerID="110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.321930 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6dc7f74bf4-rpjsz" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344103 4684 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344147 4684 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344160 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d510be09-5472-4350-8930-0cda7b4b9c84-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344173 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm9v6\" (UniqueName: \"kubernetes.io/projected/d510be09-5472-4350-8930-0cda7b4b9c84-kube-api-access-xm9v6\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344186 4684 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d510be09-5472-4350-8930-0cda7b4b9c84-logs\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344197 4684 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.344208 4684 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d510be09-5472-4350-8930-0cda7b4b9c84-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.376807 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dc7f74bf4-rpjsz"] Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.384884 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6dc7f74bf4-rpjsz"] Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.488268 4684 scope.go:117] "RemoveContainer" containerID="500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.504735 4684 scope.go:117] "RemoveContainer" containerID="110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30" Jan 23 10:13:44 crc kubenswrapper[4684]: E0123 10:13:44.505177 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30\": container with ID starting with 110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30 not found: ID does not exist" containerID="110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.505215 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30"} err="failed to get container status \"110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30\": rpc error: code = NotFound desc = could not find container \"110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30\": container with 
ID starting with 110cc9e712e6d310fdaa9b0e893f0d65c774fc0a924a38fdd3917593ab37fc30 not found: ID does not exist" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.505241 4684 scope.go:117] "RemoveContainer" containerID="500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e" Jan 23 10:13:44 crc kubenswrapper[4684]: E0123 10:13:44.505483 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e\": container with ID starting with 500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e not found: ID does not exist" containerID="500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.505510 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e"} err="failed to get container status \"500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e\": rpc error: code = NotFound desc = could not find container \"500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e\": container with ID starting with 500a35b661f8c3c7cc0acf170b117c1aa4c0e826b2de34ff32e9da2f946ab45e not found: ID does not exist" Jan 23 10:13:44 crc kubenswrapper[4684]: I0123 10:13:44.582389 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:13:44 crc kubenswrapper[4684]: E0123 10:13:44.583050 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:13:45 crc kubenswrapper[4684]: I0123 10:13:45.593616 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" path="/var/lib/kubelet/pods/d510be09-5472-4350-8930-0cda7b4b9c84/volumes" Jan 23 10:13:55 crc kubenswrapper[4684]: I0123 10:13:55.582156 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:13:55 crc kubenswrapper[4684]: E0123 10:13:55.583039 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:14:07 crc kubenswrapper[4684]: I0123 10:14:07.589031 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:14:07 crc kubenswrapper[4684]: E0123 10:14:07.589762 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:14:22 crc kubenswrapper[4684]: I0123 10:14:22.582793 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:14:22 crc kubenswrapper[4684]: E0123 10:14:22.583440 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.006807 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 10:14:23 crc kubenswrapper[4684]: E0123 10:14:23.008196 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.008333 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" Jan 23 10:14:23 crc kubenswrapper[4684]: E0123 10:14:23.008515 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon-log" Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.008534 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon-log" Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.009407 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon-log" Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.009451 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="d510be09-5472-4350-8930-0cda7b4b9c84" containerName="horizon" Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.011618 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.020263 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.046519 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.046849 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.047093 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.047581 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fwcdj"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108204 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108262 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108286 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108310 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108506 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108554 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108597 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108659 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxl46\" (UniqueName: \"kubernetes.io/projected/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-kube-api-access-hxl46\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.108727 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-config-data\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210605 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210662 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210692 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210725 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210778 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210797 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210818 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210845 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxl46\" (UniqueName: \"kubernetes.io/projected/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-kube-api-access-hxl46\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.210867 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-config-data\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.212093 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-config-data\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.212730 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.213091 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.213587 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.217327 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.227441 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.228017 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.232501 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.233054 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxl46\" (UniqueName: \"kubernetes.io/projected/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-kube-api-access-hxl46\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.252063 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") " pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.402283 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 10:14:23 crc kubenswrapper[4684]: I0123 10:14:23.927440 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 10:14:24 crc kubenswrapper[4684]: I0123 10:14:24.699856 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a","Type":"ContainerStarted","Data":"c73a4b7cfa993f72d85b940fb8663f968abd5d08866b610e4c3f3153d7472c87"}
Jan 23 10:14:37 crc kubenswrapper[4684]: I0123 10:14:37.590764 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:14:37 crc kubenswrapper[4684]: E0123 10:14:37.591709 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:14:50 crc kubenswrapper[4684]: I0123 10:14:50.581903 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:14:50 crc kubenswrapper[4684]: E0123 10:14:50.582757 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.262575 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"]
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.269797 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.274402 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.286488 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.287391 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"]
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.361740 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-secret-volume\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.361792 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-config-volume\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.361832 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jrb\" (UniqueName: \"kubernetes.io/projected/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-kube-api-access-f4jrb\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.463400 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-secret-volume\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.463760 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-config-volume\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.463809 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4jrb\" (UniqueName: \"kubernetes.io/projected/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-kube-api-access-f4jrb\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.465068 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-config-volume\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.470515 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-secret-volume\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.481152 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4jrb\" (UniqueName: \"kubernetes.io/projected/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-kube-api-access-f4jrb\") pod \"collect-profiles-29486055-s9gh9\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:00 crc kubenswrapper[4684]: I0123 10:15:00.604223 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:03 crc kubenswrapper[4684]: I0123 10:15:03.584164 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:15:03 crc kubenswrapper[4684]: E0123 10:15:03.585143 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:15:13 crc kubenswrapper[4684]: E0123 10:15:13.462812 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 23 10:15:13 crc kubenswrapper[4684]: E0123 10:15:13.470798 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxl46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 10:15:13 crc kubenswrapper[4684]: E0123 10:15:13.471981 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"
Jan 23 10:15:13 crc kubenswrapper[4684]: I0123 10:15:13.807224 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"]
Jan 23 10:15:14 crc kubenswrapper[4684]: I0123 10:15:14.282180 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9" event={"ID":"dacd1c3b-ffb5-42d5-bf62-88fa6933e756","Type":"ContainerStarted","Data":"0c9aa89c15f3d5870b4fb2317a295d342ca113d3800692963756e500762b975c"}
Jan 23 10:15:14 crc kubenswrapper[4684]: I0123 10:15:14.282557 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9" event={"ID":"dacd1c3b-ffb5-42d5-bf62-88fa6933e756","Type":"ContainerStarted","Data":"c61b32f20b6a0304954cc1c239153149f5d971d39b939c3c5ddefb04f9fab15d"}
Jan 23 10:15:14 crc kubenswrapper[4684]: E0123 10:15:14.283623 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"
Jan 23 10:15:14 crc kubenswrapper[4684]: I0123 10:15:14.320933 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9" podStartSLOduration=14.320909551 podStartE2EDuration="14.320909551s" podCreationTimestamp="2026-01-23 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:15:14.313565531 +0000 UTC m=+4086.936944082" watchObservedRunningTime="2026-01-23 10:15:14.320909551 +0000 UTC m=+4086.944288092"
Jan 23 10:15:15 crc kubenswrapper[4684]: I0123 10:15:15.296272 4684 generic.go:334] "Generic (PLEG): container finished" podID="dacd1c3b-ffb5-42d5-bf62-88fa6933e756" containerID="0c9aa89c15f3d5870b4fb2317a295d342ca113d3800692963756e500762b975c" exitCode=0
Jan 23 10:15:15 crc kubenswrapper[4684]: I0123 10:15:15.296429 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9" event={"ID":"dacd1c3b-ffb5-42d5-bf62-88fa6933e756","Type":"ContainerDied","Data":"0c9aa89c15f3d5870b4fb2317a295d342ca113d3800692963756e500762b975c"}
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.581882 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:15:16 crc kubenswrapper[4684]: E0123 10:15:16.582474 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.707410 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.826491 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4jrb\" (UniqueName: \"kubernetes.io/projected/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-kube-api-access-f4jrb\") pod \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") "
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.826850 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-secret-volume\") pod \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") "
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.826892 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-config-volume\") pod \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\" (UID: \"dacd1c3b-ffb5-42d5-bf62-88fa6933e756\") "
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.827733 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-config-volume" (OuterVolumeSpecName: "config-volume") pod "dacd1c3b-ffb5-42d5-bf62-88fa6933e756" (UID: "dacd1c3b-ffb5-42d5-bf62-88fa6933e756"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.843043 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-kube-api-access-f4jrb" (OuterVolumeSpecName: "kube-api-access-f4jrb") pod "dacd1c3b-ffb5-42d5-bf62-88fa6933e756" (UID: "dacd1c3b-ffb5-42d5-bf62-88fa6933e756"). InnerVolumeSpecName "kube-api-access-f4jrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.849927 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dacd1c3b-ffb5-42d5-bf62-88fa6933e756" (UID: "dacd1c3b-ffb5-42d5-bf62-88fa6933e756"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.880549 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx"]
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.889103 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-lfsjx"]
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.929025 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4jrb\" (UniqueName: \"kubernetes.io/projected/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-kube-api-access-f4jrb\") on node \"crc\" DevicePath \"\""
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.929061 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 10:15:16 crc kubenswrapper[4684]: I0123 10:15:16.929073 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacd1c3b-ffb5-42d5-bf62-88fa6933e756-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 10:15:17 crc kubenswrapper[4684]: I0123 10:15:17.314600 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9" event={"ID":"dacd1c3b-ffb5-42d5-bf62-88fa6933e756","Type":"ContainerDied","Data":"c61b32f20b6a0304954cc1c239153149f5d971d39b939c3c5ddefb04f9fab15d"}
Jan 23 10:15:17 crc kubenswrapper[4684]: I0123 10:15:17.314947 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c61b32f20b6a0304954cc1c239153149f5d971d39b939c3c5ddefb04f9fab15d"
Jan 23 10:15:17 crc kubenswrapper[4684]: I0123 10:15:17.314670 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486055-s9gh9"
Jan 23 10:15:17 crc kubenswrapper[4684]: I0123 10:15:17.594861 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0418d43a-0c43-459c-baf2-71075458ff45" path="/var/lib/kubelet/pods/0418d43a-0c43-459c-baf2-71075458ff45/volumes"
Jan 23 10:15:27 crc kubenswrapper[4684]: I0123 10:15:27.078647 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 23 10:15:28 crc kubenswrapper[4684]: I0123 10:15:28.414353 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a","Type":"ContainerStarted","Data":"fd76173876ef1807d994f8ff7481a70adf6b0ba07b56c88142ccaa797558f7e1"}
Jan 23 10:15:28 crc kubenswrapper[4684]: I0123 10:15:28.446064 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.307782782 podStartE2EDuration="1m7.446042663s" podCreationTimestamp="2026-01-23 10:14:21 +0000 UTC" firstStartedPulling="2026-01-23 10:14:23.937393086 +0000 UTC m=+4036.560771627" lastFinishedPulling="2026-01-23 10:15:27.075652947 +0000 UTC m=+4099.699031508" observedRunningTime="2026-01-23 10:15:28.431721333 +0000 UTC m=+4101.055099894" watchObservedRunningTime="2026-01-23 10:15:28.446042663 +0000 UTC m=+4101.069421204"
Jan 23 10:15:30 crc kubenswrapper[4684]: I0123 10:15:30.582301 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:15:30 crc kubenswrapper[4684]: E0123 10:15:30.583078 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:15:42 crc kubenswrapper[4684]: I0123 10:15:42.582276 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:15:42 crc kubenswrapper[4684]: E0123 10:15:42.583111 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:15:55 crc kubenswrapper[4684]: I0123 10:15:55.582559 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:15:55 crc kubenswrapper[4684]: E0123 10:15:55.586950 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:16:10 crc kubenswrapper[4684]: I0123 10:16:10.582474 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:16:10 crc kubenswrapper[4684]: E0123 10:16:10.583164 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:16:16 crc kubenswrapper[4684]: I0123 10:16:16.961094 4684 scope.go:117] "RemoveContainer" containerID="f10074ccec3734d1868334e604289640a2cd5b4921d10d8fd4422520921e8f24"
Jan 23 10:16:25 crc kubenswrapper[4684]: I0123 10:16:25.582105 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:16:25 crc kubenswrapper[4684]: E0123 10:16:25.582817 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:16:36 crc kubenswrapper[4684]: I0123 10:16:36.582731 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:16:36 crc kubenswrapper[4684]: E0123 10:16:36.583500 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:16:48 crc kubenswrapper[4684]: I0123 10:16:48.582841 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b"
Jan 23 10:16:49 crc kubenswrapper[4684]: I0123 10:16:49.187552 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"eb2cd5a3802daabd6f25e381fb866c36fce1be8cb93402ebfba6f9d62b385554"}
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.715860 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwt7n"]
Jan 23 10:17:17 crc kubenswrapper[4684]: E0123 10:17:17.728180 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dacd1c3b-ffb5-42d5-bf62-88fa6933e756" containerName="collect-profiles"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.728417 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacd1c3b-ffb5-42d5-bf62-88fa6933e756" containerName="collect-profiles"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.728657 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="dacd1c3b-ffb5-42d5-bf62-88fa6933e756" containerName="collect-profiles"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.730190 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.742901 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwt7n"]
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.840424 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26xk6\" (UniqueName: \"kubernetes.io/projected/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-kube-api-access-26xk6\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.840517 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-utilities\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.840981 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-catalog-content\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.942848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-catalog-content\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.943137 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26xk6\" (UniqueName: \"kubernetes.io/projected/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-kube-api-access-26xk6\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.943308 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-utilities\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.943934 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-catalog-content\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.943937 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-utilities\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:17 crc kubenswrapper[4684]: I0123 10:17:17.968679 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26xk6\" (UniqueName: \"kubernetes.io/projected/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-kube-api-access-26xk6\") pod \"certified-operators-cwt7n\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") " pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:18 crc kubenswrapper[4684]: I0123 10:17:18.062175 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:18 crc kubenswrapper[4684]: I0123 10:17:18.631201 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwt7n"]
Jan 23 10:17:19 crc kubenswrapper[4684]: I0123 10:17:19.476291 4684 generic.go:334] "Generic (PLEG): container finished" podID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerID="32d6222b9b708dbb49e466d4f6f6e1e73be22e4df955bc99067b32cd5616eddb" exitCode=0
Jan 23 10:17:19 crc kubenswrapper[4684]: I0123 10:17:19.476373 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerDied","Data":"32d6222b9b708dbb49e466d4f6f6e1e73be22e4df955bc99067b32cd5616eddb"}
Jan 23 10:17:19 crc kubenswrapper[4684]: I0123 10:17:19.476568 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerStarted","Data":"e74068ad4783113f1ebf9f0b49ff1cd79515f6d2360e163762e8a6c5c734255f"}
Jan 23 10:17:19 crc kubenswrapper[4684]: I0123 10:17:19.479271 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 10:17:21 crc kubenswrapper[4684]: I0123 10:17:21.494995 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerStarted","Data":"675ec7b58af168db31eae0af2c45a99310aad5ad9b49c7b541cec52730da7552"}
Jan 23 10:17:23 crc kubenswrapper[4684]: I0123 10:17:23.559191 4684 generic.go:334] "Generic (PLEG): container finished" podID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerID="675ec7b58af168db31eae0af2c45a99310aad5ad9b49c7b541cec52730da7552" exitCode=0
Jan 23 10:17:23 crc kubenswrapper[4684]: I0123 10:17:23.559269 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerDied","Data":"675ec7b58af168db31eae0af2c45a99310aad5ad9b49c7b541cec52730da7552"}
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.279222 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q2tk9"]
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.281904 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.294173 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnch9\" (UniqueName: \"kubernetes.io/projected/621e1e97-7c26-40e8-8bba-2709c642655f-kube-api-access-nnch9\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.294361 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-utilities\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.294524 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-catalog-content\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.295122 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2tk9"]
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.396345 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnch9\" (UniqueName: \"kubernetes.io/projected/621e1e97-7c26-40e8-8bba-2709c642655f-kube-api-access-nnch9\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.396460 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-utilities\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.396929 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-utilities\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.397052 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-catalog-content\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.397297 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-catalog-content\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.431112 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnch9\" (UniqueName: \"kubernetes.io/projected/621e1e97-7c26-40e8-8bba-2709c642655f-kube-api-access-nnch9\") pod \"redhat-marketplace-q2tk9\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") " pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:25 crc kubenswrapper[4684]: I0123 10:17:25.601088 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:26 crc kubenswrapper[4684]: I0123 10:17:26.156078 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2tk9"]
Jan 23 10:17:26 crc kubenswrapper[4684]: I0123 10:17:26.583455 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerStarted","Data":"f6e1448b4c258265df86d47f0c23b4738bb91817387729302bdab5b7c30c241d"}
Jan 23 10:17:27 crc kubenswrapper[4684]: I0123 10:17:27.593939 4684 generic.go:334] "Generic (PLEG): container finished" podID="621e1e97-7c26-40e8-8bba-2709c642655f" containerID="6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b" exitCode=0
Jan 23 10:17:27 crc kubenswrapper[4684]: I0123 10:17:27.594008 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerDied","Data":"6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b"}
Jan 23 10:17:27 crc kubenswrapper[4684]: I0123 10:17:27.599170 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerStarted","Data":"5730b42334f147036134ac6cc264618817f1b849db4142845371cddbede0c688"}
Jan 23 10:17:27 crc kubenswrapper[4684]: I0123 10:17:27.637120 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwt7n" podStartSLOduration=3.417555369 podStartE2EDuration="10.637102328s" podCreationTimestamp="2026-01-23 10:17:17 +0000 UTC" firstStartedPulling="2026-01-23 10:17:19.478473213 +0000 UTC m=+4212.101851754" lastFinishedPulling="2026-01-23 10:17:26.698020172 +0000 UTC m=+4219.321398713" observedRunningTime="2026-01-23 10:17:27.628750979 +0000 UTC m=+4220.252129530" watchObservedRunningTime="2026-01-23 10:17:27.637102328 +0000 UTC m=+4220.260480869"
Jan 23 10:17:28 crc kubenswrapper[4684]: I0123 10:17:28.064333 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:28 crc kubenswrapper[4684]: I0123 10:17:28.064446 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:28 crc kubenswrapper[4684]: I0123 10:17:28.610044 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerStarted","Data":"06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42"}
Jan 23 10:17:29 crc kubenswrapper[4684]: I0123 10:17:29.136455 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cwt7n" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="registry-server" probeResult="failure" output=<
Jan 23 10:17:29 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s
Jan 23 10:17:29 crc kubenswrapper[4684]: >
Jan 23 10:17:29 crc kubenswrapper[4684]: I0123 10:17:29.618666 4684 generic.go:334] "Generic (PLEG): container finished" podID="621e1e97-7c26-40e8-8bba-2709c642655f" containerID="06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42" exitCode=0
Jan 23 10:17:29 crc kubenswrapper[4684]: I0123 10:17:29.618748 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerDied","Data":"06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42"}
Jan 23 10:17:30 crc kubenswrapper[4684]: I0123 10:17:30.629751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerStarted","Data":"ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe"}
Jan 23 10:17:30 crc kubenswrapper[4684]: I0123 10:17:30.651882 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q2tk9" podStartSLOduration=3.180106061 podStartE2EDuration="5.651864732s" podCreationTimestamp="2026-01-23 10:17:25 +0000 UTC" firstStartedPulling="2026-01-23 10:17:27.595873979 +0000 UTC m=+4220.219252520" lastFinishedPulling="2026-01-23 10:17:30.06763264 +0000 UTC m=+4222.691011191" observedRunningTime="2026-01-23 10:17:30.650099952 +0000 UTC m=+4223.273478493" watchObservedRunningTime="2026-01-23 10:17:30.651864732 +0000 UTC m=+4223.275243273"
Jan 23 10:17:35 crc kubenswrapper[4684]: I0123 10:17:35.601521 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:35 crc kubenswrapper[4684]: I0123 10:17:35.602102 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:35 crc kubenswrapper[4684]: I0123 10:17:35.655880 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:35 crc kubenswrapper[4684]: I0123 10:17:35.741290 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:35 crc kubenswrapper[4684]: I0123 10:17:35.903543 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2tk9"]
Jan 23 10:17:37 crc kubenswrapper[4684]: I0123 10:17:37.708671 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q2tk9" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="registry-server" containerID="cri-o://ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe" gracePeriod=2
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.138576 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.197613 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.221946 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.311636 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnch9\" (UniqueName: \"kubernetes.io/projected/621e1e97-7c26-40e8-8bba-2709c642655f-kube-api-access-nnch9\") pod \"621e1e97-7c26-40e8-8bba-2709c642655f\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") "
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.311812 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-utilities\") pod \"621e1e97-7c26-40e8-8bba-2709c642655f\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") "
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.311871 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-catalog-content\") pod \"621e1e97-7c26-40e8-8bba-2709c642655f\" (UID: \"621e1e97-7c26-40e8-8bba-2709c642655f\") "
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.312648 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-utilities" (OuterVolumeSpecName: "utilities") pod "621e1e97-7c26-40e8-8bba-2709c642655f" (UID: "621e1e97-7c26-40e8-8bba-2709c642655f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.318781 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621e1e97-7c26-40e8-8bba-2709c642655f-kube-api-access-nnch9" (OuterVolumeSpecName: "kube-api-access-nnch9") pod "621e1e97-7c26-40e8-8bba-2709c642655f" (UID: "621e1e97-7c26-40e8-8bba-2709c642655f"). InnerVolumeSpecName "kube-api-access-nnch9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.334612 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "621e1e97-7c26-40e8-8bba-2709c642655f" (UID: "621e1e97-7c26-40e8-8bba-2709c642655f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.414912 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnch9\" (UniqueName: \"kubernetes.io/projected/621e1e97-7c26-40e8-8bba-2709c642655f-kube-api-access-nnch9\") on node \"crc\" DevicePath \"\""
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.414941 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.414961 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621e1e97-7c26-40e8-8bba-2709c642655f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.722939 4684 generic.go:334] "Generic (PLEG): container finished" podID="621e1e97-7c26-40e8-8bba-2709c642655f" containerID="ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe" exitCode=0
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.725197 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2tk9"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.725918 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerDied","Data":"ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe"}
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.725971 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2tk9" event={"ID":"621e1e97-7c26-40e8-8bba-2709c642655f","Type":"ContainerDied","Data":"f6e1448b4c258265df86d47f0c23b4738bb91817387729302bdab5b7c30c241d"}
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.726000 4684 scope.go:117] "RemoveContainer" containerID="ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.776062 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2tk9"]
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.778029 4684 scope.go:117] "RemoveContainer" containerID="06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.784982 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2tk9"]
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.815388 4684 scope.go:117] "RemoveContainer" containerID="6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.863842 4684 scope.go:117] "RemoveContainer" containerID="ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe"
Jan 23 10:17:38 crc kubenswrapper[4684]: E0123 10:17:38.864648 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe\": container with ID starting with ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe not found: ID does not exist" containerID="ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.864723 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe"} err="failed to get container status \"ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe\": rpc error: code = NotFound desc = could not find container \"ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe\": container with ID starting with ed7e3f0523617d5218ed6043fed80a729b2744e6b4215fdbef931baace2574fe not found: ID does not exist"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.864762 4684 scope.go:117] "RemoveContainer" containerID="06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42"
Jan 23 10:17:38 crc kubenswrapper[4684]: E0123 10:17:38.865223 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42\": container with ID starting with 06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42 not found: ID does not exist" containerID="06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.865250 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42"} err="failed to get container status \"06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42\": rpc error: code = NotFound desc = could not find container \"06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42\": container with ID starting with 06efd4f91b79641272845b014b9339d2c1f0344825a517f6c3b0dc8c02571c42 not found: ID does not exist"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.865269 4684 scope.go:117] "RemoveContainer" containerID="6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b"
Jan 23 10:17:38 crc kubenswrapper[4684]: E0123 10:17:38.865861 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b\": container with ID starting with 6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b not found: ID does not exist" containerID="6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b"
Jan 23 10:17:38 crc kubenswrapper[4684]: I0123 10:17:38.865904 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b"} err="failed to get container status \"6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b\": rpc error: code = NotFound desc = could not find container \"6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b\": container with ID starting with 6df9a3ac28064a11f3d649a7000d6b717a3c613d1d8c85476dd7551be168d17b not found: ID does not exist"
Jan 23 10:17:39 crc kubenswrapper[4684]: I0123 10:17:39.596334 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" path="/var/lib/kubelet/pods/621e1e97-7c26-40e8-8bba-2709c642655f/volumes"
Jan 23 10:17:40 crc kubenswrapper[4684]: I0123 10:17:40.305131 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwt7n"]
Jan 23 10:17:40 crc kubenswrapper[4684]: I0123 10:17:40.305567 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwt7n" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="registry-server" containerID="cri-o://5730b42334f147036134ac6cc264618817f1b849db4142845371cddbede0c688" gracePeriod=2
Jan 23 10:17:40 crc kubenswrapper[4684]: I0123 10:17:40.747206 4684 generic.go:334] "Generic (PLEG): container finished" podID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerID="5730b42334f147036134ac6cc264618817f1b849db4142845371cddbede0c688" exitCode=0
Jan 23 10:17:40 crc kubenswrapper[4684]: I0123 10:17:40.747310 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerDied","Data":"5730b42334f147036134ac6cc264618817f1b849db4142845371cddbede0c688"}
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.221232 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwt7n"
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.272440 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xk6\" (UniqueName: \"kubernetes.io/projected/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-kube-api-access-26xk6\") pod \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") "
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.272634 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-utilities\") pod \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") "
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.272761 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-catalog-content\") pod \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\" (UID: \"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1\") "
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.273358 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-utilities" (OuterVolumeSpecName: "utilities") pod "080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" (UID: "080b3bd2-8103-41a1-aeb1-c9bacfde3dd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.278805 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-kube-api-access-26xk6" (OuterVolumeSpecName: "kube-api-access-26xk6") pod "080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" (UID: "080b3bd2-8103-41a1-aeb1-c9bacfde3dd1"). InnerVolumeSpecName "kube-api-access-26xk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.318638 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" (UID: "080b3bd2-8103-41a1-aeb1-c9bacfde3dd1"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.381873 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26xk6\" (UniqueName: \"kubernetes.io/projected/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-kube-api-access-26xk6\") on node \"crc\" DevicePath \"\"" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.381909 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.381918 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.757285 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwt7n" event={"ID":"080b3bd2-8103-41a1-aeb1-c9bacfde3dd1","Type":"ContainerDied","Data":"e74068ad4783113f1ebf9f0b49ff1cd79515f6d2360e163762e8a6c5c734255f"} Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.757349 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwt7n" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.757724 4684 scope.go:117] "RemoveContainer" containerID="5730b42334f147036134ac6cc264618817f1b849db4142845371cddbede0c688" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.787548 4684 scope.go:117] "RemoveContainer" containerID="675ec7b58af168db31eae0af2c45a99310aad5ad9b49c7b541cec52730da7552" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.801510 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwt7n"] Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.817769 4684 scope.go:117] "RemoveContainer" containerID="32d6222b9b708dbb49e466d4f6f6e1e73be22e4df955bc99067b32cd5616eddb" Jan 23 10:17:41 crc kubenswrapper[4684]: I0123 10:17:41.827246 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwt7n"] Jan 23 10:17:43 crc kubenswrapper[4684]: I0123 10:17:43.595381 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" path="/var/lib/kubelet/pods/080b3bd2-8103-41a1-aeb1-c9bacfde3dd1/volumes" Jan 23 10:19:13 crc kubenswrapper[4684]: I0123 10:19:13.728209 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:19:13 crc kubenswrapper[4684]: I0123 10:19:13.728606 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:19:43 crc kubenswrapper[4684]: I0123 10:19:43.728354 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:19:43 crc kubenswrapper[4684]: I0123 10:19:43.728903 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:20:13 crc kubenswrapper[4684]: I0123 10:20:13.729093 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:20:13 crc kubenswrapper[4684]: I0123 10:20:13.729689 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:20:13 crc kubenswrapper[4684]: I0123 10:20:13.729765 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 10:20:13 crc kubenswrapper[4684]: I0123 10:20:13.730640 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb2cd5a3802daabd6f25e381fb866c36fce1be8cb93402ebfba6f9d62b385554"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:20:13 crc kubenswrapper[4684]: I0123 10:20:13.730727 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://eb2cd5a3802daabd6f25e381fb866c36fce1be8cb93402ebfba6f9d62b385554" gracePeriod=600 Jan 23 10:20:14 crc kubenswrapper[4684]: I0123 10:20:14.087410 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="eb2cd5a3802daabd6f25e381fb866c36fce1be8cb93402ebfba6f9d62b385554" exitCode=0 Jan 23 10:20:14 crc kubenswrapper[4684]: I0123 10:20:14.087518 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"eb2cd5a3802daabd6f25e381fb866c36fce1be8cb93402ebfba6f9d62b385554"} Jan 23 10:20:14 crc kubenswrapper[4684]: I0123 10:20:14.087833 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"} Jan 23 10:20:14 crc kubenswrapper[4684]: I0123 10:20:14.087857 4684 scope.go:117] "RemoveContainer" containerID="c2163abc5f57af87ea82023d09559fe5c528b862743942dcea670480cc44810b" Jan 23 10:21:21 crc kubenswrapper[4684]: I0123 10:21:21.061083 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-9r5vp"] Jan 23 10:21:21 crc kubenswrapper[4684]: 
I0123 10:21:21.076965 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-e4ed-account-create-update-rzjjx"] Jan 23 10:21:21 crc kubenswrapper[4684]: I0123 10:21:21.085301 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-e4ed-account-create-update-rzjjx"] Jan 23 10:21:21 crc kubenswrapper[4684]: I0123 10:21:21.092597 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-9r5vp"] Jan 23 10:21:21 crc kubenswrapper[4684]: I0123 10:21:21.594384 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a1d764d-4ecd-4f2f-a4b8-848142c93b15" path="/var/lib/kubelet/pods/9a1d764d-4ecd-4f2f-a4b8-848142c93b15/volumes" Jan 23 10:21:21 crc kubenswrapper[4684]: I0123 10:21:21.596002 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e275bc-7d07-4a5c-98be-6e9eb72cf537" path="/var/lib/kubelet/pods/a7e275bc-7d07-4a5c-98be-6e9eb72cf537/volumes" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.269896 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sd2sk"] Jan 23 10:21:37 crc kubenswrapper[4684]: E0123 10:21:37.270843 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="registry-server" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.270857 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="registry-server" Jan 23 10:21:37 crc kubenswrapper[4684]: E0123 10:21:37.270879 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="extract-utilities" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.270887 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="extract-utilities" Jan 23 10:21:37 crc kubenswrapper[4684]: E0123 10:21:37.270899 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="extract-utilities" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.270905 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="extract-utilities" Jan 23 10:21:37 crc kubenswrapper[4684]: E0123 10:21:37.270912 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="extract-content" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.270918 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="extract-content" Jan 23 10:21:37 crc kubenswrapper[4684]: E0123 10:21:37.270931 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="registry-server" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.270936 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="registry-server" Jan 23 10:21:37 crc kubenswrapper[4684]: E0123 10:21:37.270946 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="extract-content" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.270951 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="extract-content" Jan 23 10:21:37 crc 
kubenswrapper[4684]: I0123 10:21:37.271112 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="080b3bd2-8103-41a1-aeb1-c9bacfde3dd1" containerName="registry-server" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.271122 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="621e1e97-7c26-40e8-8bba-2709c642655f" containerName="registry-server" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.272417 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.322296 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sd2sk"] Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.437200 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwb9r\" (UniqueName: \"kubernetes.io/projected/6028abe2-45cf-4aaa-bf11-c21dc120fd81-kube-api-access-cwb9r\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.437536 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-catalog-content\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.437651 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-utilities\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.539951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwb9r\" (UniqueName: \"kubernetes.io/projected/6028abe2-45cf-4aaa-bf11-c21dc120fd81-kube-api-access-cwb9r\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.540013 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-catalog-content\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.540118 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-utilities\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.540599 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-catalog-content\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " 
pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.540645 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-utilities\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.561206 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwb9r\" (UniqueName: \"kubernetes.io/projected/6028abe2-45cf-4aaa-bf11-c21dc120fd81-kube-api-access-cwb9r\") pod \"community-operators-sd2sk\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:37 crc kubenswrapper[4684]: I0123 10:21:37.605967 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:38 crc kubenswrapper[4684]: I0123 10:21:38.223226 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sd2sk"] Jan 23 10:21:38 crc kubenswrapper[4684]: I0123 10:21:38.800638 4684 generic.go:334] "Generic (PLEG): container finished" podID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerID="977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2" exitCode=0 Jan 23 10:21:38 crc kubenswrapper[4684]: I0123 10:21:38.800745 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerDied","Data":"977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2"} Jan 23 10:21:38 crc kubenswrapper[4684]: I0123 10:21:38.801223 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerStarted","Data":"69a088f72e45b619e45b2dbd58d55d534d2d4a2f322cd4311c5dce6c9529dc89"} Jan 23 10:21:39 crc kubenswrapper[4684]: I0123 10:21:39.818373 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerStarted","Data":"a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151"} Jan 23 10:21:40 crc kubenswrapper[4684]: I0123 10:21:40.829745 4684 generic.go:334] "Generic (PLEG): container finished" podID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerID="a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151" exitCode=0 Jan 23 10:21:40 crc kubenswrapper[4684]: I0123 10:21:40.829965 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerDied","Data":"a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151"} Jan 23 10:21:41 crc kubenswrapper[4684]: I0123 10:21:41.840690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerStarted","Data":"2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1"} Jan 23 10:21:41 crc kubenswrapper[4684]: I0123 10:21:41.860974 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sd2sk" podStartSLOduration=2.395549419 
podStartE2EDuration="4.860953458s" podCreationTimestamp="2026-01-23 10:21:37 +0000 UTC" firstStartedPulling="2026-01-23 10:21:38.803967187 +0000 UTC m=+4471.427345728" lastFinishedPulling="2026-01-23 10:21:41.269371216 +0000 UTC m=+4473.892749767" observedRunningTime="2026-01-23 10:21:41.860160955 +0000 UTC m=+4474.483539506" watchObservedRunningTime="2026-01-23 10:21:41.860953458 +0000 UTC m=+4474.484331999" Jan 23 10:21:47 crc kubenswrapper[4684]: I0123 10:21:47.606203 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:47 crc kubenswrapper[4684]: I0123 10:21:47.606770 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:47 crc kubenswrapper[4684]: I0123 10:21:47.650675 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:47 crc kubenswrapper[4684]: I0123 10:21:47.940763 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:48 crc kubenswrapper[4684]: I0123 10:21:48.012579 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sd2sk"] Jan 23 10:21:49 crc kubenswrapper[4684]: I0123 10:21:49.909171 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sd2sk" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="registry-server" containerID="cri-o://2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1" gracePeriod=2 Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.538905 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.631390 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-utilities\") pod \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.631530 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-catalog-content\") pod \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.631585 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwb9r\" (UniqueName: \"kubernetes.io/projected/6028abe2-45cf-4aaa-bf11-c21dc120fd81-kube-api-access-cwb9r\") pod \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\" (UID: \"6028abe2-45cf-4aaa-bf11-c21dc120fd81\") " Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.633820 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-utilities" (OuterVolumeSpecName: "utilities") pod "6028abe2-45cf-4aaa-bf11-c21dc120fd81" (UID: "6028abe2-45cf-4aaa-bf11-c21dc120fd81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.651868 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6028abe2-45cf-4aaa-bf11-c21dc120fd81-kube-api-access-cwb9r" (OuterVolumeSpecName: "kube-api-access-cwb9r") pod "6028abe2-45cf-4aaa-bf11-c21dc120fd81" (UID: "6028abe2-45cf-4aaa-bf11-c21dc120fd81"). InnerVolumeSpecName "kube-api-access-cwb9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.688203 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6028abe2-45cf-4aaa-bf11-c21dc120fd81" (UID: "6028abe2-45cf-4aaa-bf11-c21dc120fd81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.735710 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.735737 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6028abe2-45cf-4aaa-bf11-c21dc120fd81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.735747 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwb9r\" (UniqueName: \"kubernetes.io/projected/6028abe2-45cf-4aaa-bf11-c21dc120fd81-kube-api-access-cwb9r\") on node \"crc\" DevicePath \"\"" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.918932 4684 generic.go:334] "Generic (PLEG): container finished" podID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerID="2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1" exitCode=0 Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.918979 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sd2sk" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.918996 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerDied","Data":"2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1"} Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.920185 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd2sk" event={"ID":"6028abe2-45cf-4aaa-bf11-c21dc120fd81","Type":"ContainerDied","Data":"69a088f72e45b619e45b2dbd58d55d534d2d4a2f322cd4311c5dce6c9529dc89"} Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.920272 4684 scope.go:117] "RemoveContainer" containerID="2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.956556 4684 scope.go:117] "RemoveContainer" containerID="a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.978356 4684 scope.go:117] "RemoveContainer" containerID="977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2" Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.978521 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sd2sk"] Jan 23 10:21:50 crc kubenswrapper[4684]: I0123 10:21:50.984203 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sd2sk"] Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.029330 4684 scope.go:117] "RemoveContainer" containerID="2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1" Jan 23 10:21:51 crc kubenswrapper[4684]: E0123 10:21:51.029746 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1\": container with ID starting with 2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1 not found: ID does not exist" containerID="2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1" Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.029910 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1"} err="failed to get container status \"2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1\": rpc error: code = NotFound desc = could not find container \"2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1\": container with ID starting with 2aebe187657002b1784990f923df53cef18f8571cd66e16e9283715d1db287a1 not found: ID does not exist" Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.029999 4684 scope.go:117] "RemoveContainer" containerID="a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151" Jan 23 10:21:51 crc kubenswrapper[4684]: E0123 10:21:51.031896 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151\": container with ID starting with a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151 not found: ID does not exist" containerID="a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151" Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.031932 4684 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151"} err="failed to get container status \"a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151\": rpc error: code = NotFound desc = could not find container \"a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151\": container with ID starting with a6c8e04e219923a5fac6e37ff44980e685f410783e6ba9312613e1ca2ecc5151 not found: ID does not exist" Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.031956 4684 scope.go:117] "RemoveContainer" containerID="977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2" Jan 23 10:21:51 crc kubenswrapper[4684]: E0123 10:21:51.032281 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2\": container with ID starting with 977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2 not found: ID does not exist" containerID="977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2" Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.032308 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2"} err="failed to get container status \"977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2\": rpc error: code = NotFound desc = could not find container \"977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2\": container with ID starting with 977e601e223458f107ab49ff6ec1af3ae9651dc8de0155616812bb687646c5a2 not found: ID does not exist" Jan 23 10:21:51 crc kubenswrapper[4684]: I0123 10:21:51.598649 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" path="/var/lib/kubelet/pods/6028abe2-45cf-4aaa-bf11-c21dc120fd81/volumes" Jan 23 10:22:11 crc kubenswrapper[4684]: I0123 10:22:11.047082 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-mdmkd"] Jan 23 10:22:11 crc kubenswrapper[4684]: I0123 10:22:11.060041 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-mdmkd"] Jan 23 10:22:11 crc kubenswrapper[4684]: I0123 10:22:11.593236 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd829550-43d3-42d9-a9b4-e088ef820a77" path="/var/lib/kubelet/pods/bd829550-43d3-42d9-a9b4-e088ef820a77/volumes" Jan 23 10:22:17 crc kubenswrapper[4684]: I0123 10:22:17.175333 4684 scope.go:117] "RemoveContainer" containerID="aa1cfff82632dd93f61919922195de1d8bd4c2eada5623abb4fcbef1821342cc" Jan 23 10:22:17 crc kubenswrapper[4684]: I0123 10:22:17.200364 4684 scope.go:117] "RemoveContainer" containerID="6b65babecde8db8f98f55ed29b02489a73c7ecaf2fe163886352ecff8af568c9" Jan 23 10:22:17 crc kubenswrapper[4684]: I0123 10:22:17.256250 4684 scope.go:117] "RemoveContainer" containerID="c2d9e68fc6d1318a60f5e585926097d58beebbc02b5060432a0b9bf9f5fdd3e7" Jan 23 10:22:43 crc kubenswrapper[4684]: I0123 10:22:43.729181 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:22:43 crc kubenswrapper[4684]: I0123 10:22:43.729652 4684 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.208896 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hq6wz"] Jan 23 10:23:04 crc kubenswrapper[4684]: E0123 10:23:04.209932 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="registry-server" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.210025 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="registry-server" Jan 23 10:23:04 crc kubenswrapper[4684]: E0123 10:23:04.210047 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="extract-utilities" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.210056 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="extract-utilities" Jan 23 10:23:04 crc kubenswrapper[4684]: E0123 10:23:04.210077 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="extract-content" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.210085 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="extract-content" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.210323 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6028abe2-45cf-4aaa-bf11-c21dc120fd81" containerName="registry-server" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.211997 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.223250 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq6wz"] Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.299103 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-utilities\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.299316 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-catalog-content\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.299559 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5cd\" (UniqueName: \"kubernetes.io/projected/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-kube-api-access-cb5cd\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.402169 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5cd\" (UniqueName: \"kubernetes.io/projected/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-kube-api-access-cb5cd\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.402363 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-utilities\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.402449 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-catalog-content\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.403007 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-catalog-content\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.403157 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-utilities\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.427861 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cb5cd\" (UniqueName: \"kubernetes.io/projected/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-kube-api-access-cb5cd\") pod \"redhat-operators-hq6wz\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:04 crc kubenswrapper[4684]: I0123 10:23:04.542660 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:05 crc kubenswrapper[4684]: I0123 10:23:05.024639 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq6wz"] Jan 23 10:23:05 crc kubenswrapper[4684]: I0123 10:23:05.537091 4684 generic.go:334] "Generic (PLEG): container finished" podID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerID="217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438" exitCode=0 Jan 23 10:23:05 crc kubenswrapper[4684]: I0123 10:23:05.537183 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerDied","Data":"217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438"} Jan 23 10:23:05 crc kubenswrapper[4684]: I0123 10:23:05.537385 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerStarted","Data":"62279149eecd6fd93091753ab25fc48869396c2b68dab97f5fe1502a4b1d273b"} Jan 23 10:23:05 crc kubenswrapper[4684]: I0123 10:23:05.539734 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:23:06 crc kubenswrapper[4684]: I0123 10:23:06.550590 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerStarted","Data":"184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e"} Jan 23 10:23:10 crc kubenswrapper[4684]: I0123 10:23:10.586967 4684 generic.go:334] "Generic (PLEG): container finished" podID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerID="184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e" exitCode=0 Jan 23 10:23:10 crc kubenswrapper[4684]: I0123 10:23:10.587050 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerDied","Data":"184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e"} Jan 23 10:23:11 crc kubenswrapper[4684]: I0123 10:23:11.601884 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerStarted","Data":"bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433"} Jan 23 10:23:11 crc kubenswrapper[4684]: I0123 10:23:11.633412 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hq6wz" podStartSLOduration=2.163298908 podStartE2EDuration="7.633391293s" podCreationTimestamp="2026-01-23 10:23:04 +0000 UTC" firstStartedPulling="2026-01-23 10:23:05.539484725 +0000 UTC m=+4558.162863266" lastFinishedPulling="2026-01-23 10:23:11.00957711 +0000 UTC m=+4563.632955651" observedRunningTime="2026-01-23 10:23:11.626203168 +0000 UTC m=+4564.249581709" watchObservedRunningTime="2026-01-23 10:23:11.633391293 +0000 UTC m=+4564.256769854" Jan 23 10:23:13 crc 
kubenswrapper[4684]: I0123 10:23:13.728987 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:23:13 crc kubenswrapper[4684]: I0123 10:23:13.729346 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:23:14 crc kubenswrapper[4684]: I0123 10:23:14.543384 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:14 crc kubenswrapper[4684]: I0123 10:23:14.543440 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:15 crc kubenswrapper[4684]: I0123 10:23:15.589961 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq6wz" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="registry-server" probeResult="failure" output=< Jan 23 10:23:15 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:23:15 crc kubenswrapper[4684]: > Jan 23 10:23:24 crc kubenswrapper[4684]: I0123 10:23:24.601468 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:24 crc kubenswrapper[4684]: I0123 10:23:24.662853 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:24 crc kubenswrapper[4684]: I0123 10:23:24.840031 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq6wz"] Jan 23 10:23:25 crc kubenswrapper[4684]: I0123 10:23:25.742629 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hq6wz" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="registry-server" containerID="cri-o://bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433" gracePeriod=2 Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.240781 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.366633 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-utilities\") pod \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.366735 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-catalog-content\") pod \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.366787 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5cd\" (UniqueName: \"kubernetes.io/projected/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-kube-api-access-cb5cd\") pod \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\" (UID: \"79e8539f-10d5-4ae9-be7d-80ea25dee6ea\") " Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.368895 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-utilities" (OuterVolumeSpecName: "utilities") pod "79e8539f-10d5-4ae9-be7d-80ea25dee6ea" (UID: "79e8539f-10d5-4ae9-be7d-80ea25dee6ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.376396 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-kube-api-access-cb5cd" (OuterVolumeSpecName: "kube-api-access-cb5cd") pod "79e8539f-10d5-4ae9-be7d-80ea25dee6ea" (UID: "79e8539f-10d5-4ae9-be7d-80ea25dee6ea"). InnerVolumeSpecName "kube-api-access-cb5cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.470232 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.470290 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5cd\" (UniqueName: \"kubernetes.io/projected/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-kube-api-access-cb5cd\") on node \"crc\" DevicePath \"\"" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.479829 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e8539f-10d5-4ae9-be7d-80ea25dee6ea" (UID: "79e8539f-10d5-4ae9-be7d-80ea25dee6ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.572373 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e8539f-10d5-4ae9-be7d-80ea25dee6ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.750740 4684 generic.go:334] "Generic (PLEG): container finished" podID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerID="bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433" exitCode=0 Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.751842 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq6wz" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.754747 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerDied","Data":"bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433"} Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.754793 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq6wz" event={"ID":"79e8539f-10d5-4ae9-be7d-80ea25dee6ea","Type":"ContainerDied","Data":"62279149eecd6fd93091753ab25fc48869396c2b68dab97f5fe1502a4b1d273b"} Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.754812 4684 scope.go:117] "RemoveContainer" containerID="bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.785795 4684 scope.go:117] "RemoveContainer" containerID="184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.794185 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq6wz"] Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.806449 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hq6wz"] Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.819295 4684 scope.go:117] "RemoveContainer" containerID="217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.849916 4684 scope.go:117] "RemoveContainer" containerID="bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433" Jan 23 10:23:26 crc kubenswrapper[4684]: E0123 10:23:26.850385 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433\": container with ID starting with bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433 not found: ID does not exist" containerID="bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433" Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.850420 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433"} err="failed to get container status \"bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433\": rpc error: code = NotFound desc = could not find container \"bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433\": container with ID starting with bf550f5ab76972fd4f3c15e4f249b517a6282a8aa501dcf756fe709db9661433 not found: ID does not exist" Jan 23 10:23:26 crc 
kubenswrapper[4684]: I0123 10:23:26.850447 4684 scope.go:117] "RemoveContainer" containerID="184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e"
Jan 23 10:23:26 crc kubenswrapper[4684]: E0123 10:23:26.850914 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e\": container with ID starting with 184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e not found: ID does not exist" containerID="184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e"
Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.850942 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e"} err="failed to get container status \"184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e\": rpc error: code = NotFound desc = could not find container \"184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e\": container with ID starting with 184eef0dcaa41ebb8b29b57e48047f26a6347632d86891f62ad6564829fe824e not found: ID does not exist"
Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.850964 4684 scope.go:117] "RemoveContainer" containerID="217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438"
Jan 23 10:23:26 crc kubenswrapper[4684]: E0123 10:23:26.851447 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438\": container with ID starting with 217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438 not found: ID does not exist" containerID="217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438"
Jan 23 10:23:26 crc kubenswrapper[4684]: I0123 10:23:26.851594 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438"} err="failed to get container status \"217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438\": rpc error: code = NotFound desc = could not find container \"217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438\": container with ID starting with 217886f1b98618a7ac40dda33b3bb2e6719b1bcfc1f824713fa0f63f47356438 not found: ID does not exist"
Jan 23 10:23:27 crc kubenswrapper[4684]: I0123 10:23:27.594350 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" path="/var/lib/kubelet/pods/79e8539f-10d5-4ae9-be7d-80ea25dee6ea/volumes"
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.728743 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.729335 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.729391 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf"
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.730276 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.730346 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc" gracePeriod=600
Jan 23 10:23:43 crc kubenswrapper[4684]: E0123 10:23:43.867437 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.893635 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc" exitCode=0
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.893827 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"}
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.894126 4684 scope.go:117] "RemoveContainer" containerID="eb2cd5a3802daabd6f25e381fb866c36fce1be8cb93402ebfba6f9d62b385554"
Jan 23 10:23:43 crc kubenswrapper[4684]: I0123 10:23:43.894997 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:23:43 crc kubenswrapper[4684]: E0123 10:23:43.895635 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:23:58 crc kubenswrapper[4684]: I0123 10:23:58.582041 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:23:58 crc kubenswrapper[4684]: E0123 10:23:58.582815 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
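The "back-off 5m0s" in these entries is kubelet's restart back-off ceiling: each failed restart of a crash-looping container doubles the wait before the next attempt, up to a cap. A minimal Go sketch of that schedule, assuming the usual kubelet defaults (10s initial delay, doubling); only the 5m cap is actually visible in this log. The RemoveContainer/CrashLoopBackOff pairs that follow are sync-loop retries hitting that ceiling, which is why they recur every few seconds while the pod stays down.

package main

import (
	"fmt"
	"time"
)

// Restart back-off behind the "back-off 5m0s" messages: the delay doubles
// after every failed restart until it hits the ceiling. The 10s initial
// delay is an assumed default; only the 5m cap shows up in this log.
func main() {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: wait %v before the next attempt\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // the "back-off 5m0s" ceiling
		}
	}
}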
Jan 23 10:24:12 crc kubenswrapper[4684]: I0123 10:24:12.582677 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:24:12 crc kubenswrapper[4684]: E0123 10:24:12.583373 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:24:23 crc kubenswrapper[4684]: I0123 10:24:23.582437 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:24:23 crc kubenswrapper[4684]: E0123 10:24:23.583290 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:24:38 crc kubenswrapper[4684]: I0123 10:24:38.583254 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:24:38 crc kubenswrapper[4684]: E0123 10:24:38.584372 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:24:50 crc kubenswrapper[4684]: I0123 10:24:50.581991 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:24:50 crc kubenswrapper[4684]: E0123 10:24:50.582860 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:25:05 crc kubenswrapper[4684]: I0123 10:25:05.582948 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:25:05 crc kubenswrapper[4684]: E0123 10:25:05.583992 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:25:20 crc kubenswrapper[4684]: I0123 10:25:20.582292 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:25:20 crc kubenswrapper[4684]: E0123 10:25:20.583030 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:25:32 crc kubenswrapper[4684]: I0123 10:25:32.581902 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:25:32 crc kubenswrapper[4684]: E0123 10:25:32.582601 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:25:44 crc kubenswrapper[4684]: I0123 10:25:44.582004 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:25:44 crc kubenswrapper[4684]: E0123 10:25:44.582679 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:25:55 crc kubenswrapper[4684]: I0123 10:25:55.582244 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:25:55 crc kubenswrapper[4684]: E0123 10:25:55.583994 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:26:09 crc kubenswrapper[4684]: I0123 10:26:09.582739 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:26:09 crc kubenswrapper[4684]: E0123 10:26:09.583473 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:26:22 crc kubenswrapper[4684]: I0123 10:26:22.581906 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:26:22 crc kubenswrapper[4684]: E0123 10:26:22.582578 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:26:35 crc kubenswrapper[4684]: I0123 10:26:35.582015 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:26:35 crc kubenswrapper[4684]: E0123 10:26:35.582815 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:26:48 crc kubenswrapper[4684]: I0123 10:26:48.581926 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:26:48 crc kubenswrapper[4684]: E0123 10:26:48.583686 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:27:00 crc kubenswrapper[4684]: I0123 10:27:00.581893 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:27:00 crc kubenswrapper[4684]: E0123 10:27:00.582601 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:27:15 crc kubenswrapper[4684]: I0123 10:27:15.582736 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:27:15 crc kubenswrapper[4684]: E0123 10:27:15.583519 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:27:28 crc kubenswrapper[4684]: I0123 10:27:28.581643 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:27:28 crc kubenswrapper[4684]: E0123 10:27:28.582379 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:27:40 crc kubenswrapper[4684]: I0123 10:27:40.581882 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:27:40 crc kubenswrapper[4684]: E0123 10:27:40.582581 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:27:52 crc kubenswrapper[4684]: I0123 10:27:52.582205 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:27:52 crc kubenswrapper[4684]: E0123 10:27:52.582994 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:28:03 crc kubenswrapper[4684]: I0123 10:28:03.582422 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:28:03 crc kubenswrapper[4684]: E0123 10:28:03.583150 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.032986 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h56lx"]
Jan 23 10:28:12 crc kubenswrapper[4684]: E0123 10:28:12.033669 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="registry-server"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.033684 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="registry-server"
Jan 23 10:28:12 crc kubenswrapper[4684]: E0123 10:28:12.033714 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="extract-utilities"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.033721 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="extract-utilities"
Jan 23 10:28:12 crc kubenswrapper[4684]: E0123 10:28:12.033747 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="extract-content"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.033753 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="extract-content"
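Each entry above carries a syslog prefix (Jan 23 10:25:20 crc kubenswrapper[4684]:) followed by a standard klog header: severity letter, MMDD date, wall-clock time, PID, and source file:line. A small Go sketch that splits the klog header out of a line; the regular expression is illustrative, not kubelet's own code.

package main

import (
	"fmt"
	"regexp"
)

// Minimal parser for the klog header used in these entries, e.g.
// "E0123 10:25:20.583030 4684 pod_workers.go:1301] ...".
// Layout: severity, MMDD, wall time, PID, source file:line, message.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := `E0123 10:25:20.583030 4684 pod_workers.go:1301] "Error syncing pod, skipping"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}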
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.033926 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e8539f-10d5-4ae9-be7d-80ea25dee6ea" containerName="registry-server"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.035252 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.057731 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h56lx"]
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.130436 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-utilities\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.130497 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-catalog-content\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.130766 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtwq\" (UniqueName: \"kubernetes.io/projected/665ff6df-aacd-46b1-a9a3-20a90cd5a297-kube-api-access-qhtwq\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.233011 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-utilities\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.233056 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-catalog-content\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.233101 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhtwq\" (UniqueName: \"kubernetes.io/projected/665ff6df-aacd-46b1-a9a3-20a90cd5a297-kube-api-access-qhtwq\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.234111 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-utilities\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.234365 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-catalog-content\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.260681 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtwq\" (UniqueName: \"kubernetes.io/projected/665ff6df-aacd-46b1-a9a3-20a90cd5a297-kube-api-access-qhtwq\") pod \"certified-operators-h56lx\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") " pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.366528 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:12 crc kubenswrapper[4684]: I0123 10:28:12.953274 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h56lx"]
Jan 23 10:28:13 crc kubenswrapper[4684]: I0123 10:28:13.304796 4684 generic.go:334] "Generic (PLEG): container finished" podID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerID="f683111af560060791e3757e3cf834ca253c727794dc7d0d128678c70fa639de" exitCode=0
Jan 23 10:28:13 crc kubenswrapper[4684]: I0123 10:28:13.305041 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerDied","Data":"f683111af560060791e3757e3cf834ca253c727794dc7d0d128678c70fa639de"}
Jan 23 10:28:13 crc kubenswrapper[4684]: I0123 10:28:13.305066 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerStarted","Data":"31fa7744cf5169237d80f78185fcbddf79ccaea9087716e74862d9344e026816"}
Jan 23 10:28:13 crc kubenswrapper[4684]: I0123 10:28:13.307395 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 10:28:14 crc kubenswrapper[4684]: I0123 10:28:14.316122 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerStarted","Data":"af88254f892e22e53bd4c8dcf7b3f59af1f6ffbc8d7e497508864051aa4010ed"}
Jan 23 10:28:14 crc kubenswrapper[4684]: I0123 10:28:14.582655 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:28:14 crc kubenswrapper[4684]: E0123 10:28:14.583021 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:28:15 crc kubenswrapper[4684]: I0123 10:28:15.326664 4684 generic.go:334] "Generic (PLEG): container finished" podID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerID="af88254f892e22e53bd4c8dcf7b3f59af1f6ffbc8d7e497508864051aa4010ed" exitCode=0
Jan 23 10:28:15 crc kubenswrapper[4684]: I0123 10:28:15.326735 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerDied","Data":"af88254f892e22e53bd4c8dcf7b3f59af1f6ffbc8d7e497508864051aa4010ed"}
Jan 23 10:28:16 crc kubenswrapper[4684]: I0123 10:28:16.338100 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerStarted","Data":"3a6a175e8375cead938cfbb3f2df916b09e18dde09f03a4c2052f1e7966f2eb9"}
Jan 23 10:28:16 crc kubenswrapper[4684]: I0123 10:28:16.357194 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h56lx" podStartSLOduration=1.911165124 podStartE2EDuration="4.357174129s" podCreationTimestamp="2026-01-23 10:28:12 +0000 UTC" firstStartedPulling="2026-01-23 10:28:13.307128216 +0000 UTC m=+4865.930506747" lastFinishedPulling="2026-01-23 10:28:15.753137211 +0000 UTC m=+4868.376515752" observedRunningTime="2026-01-23 10:28:16.353596426 +0000 UTC m=+4868.976974987" watchObservedRunningTime="2026-01-23 10:28:16.357174129 +0000 UTC m=+4868.980552670"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.789845 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w8q"]
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.794104 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.808599 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w8q"]
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.889831 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-utilities\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.889994 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-catalog-content\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.890029 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6zm\" (UniqueName: \"kubernetes.io/projected/962bb5c2-d106-463b-92e4-dbcdb11172d9-kube-api-access-8c6zm\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.991469 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-utilities\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.991518 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-catalog-content\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.991545 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6zm\" (UniqueName: \"kubernetes.io/projected/962bb5c2-d106-463b-92e4-dbcdb11172d9-kube-api-access-8c6zm\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.992220 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-utilities\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:19 crc kubenswrapper[4684]: I0123 10:28:19.992324 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-catalog-content\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:20 crc kubenswrapper[4684]: I0123 10:28:20.017768 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6zm\" (UniqueName: \"kubernetes.io/projected/962bb5c2-d106-463b-92e4-dbcdb11172d9-kube-api-access-8c6zm\") pod \"redhat-marketplace-w7w8q\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") " pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:20 crc kubenswrapper[4684]: I0123 10:28:20.111452 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:20 crc kubenswrapper[4684]: I0123 10:28:20.756258 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w8q"]
Jan 23 10:28:21 crc kubenswrapper[4684]: I0123 10:28:21.398430 4684 generic.go:334] "Generic (PLEG): container finished" podID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerID="91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953" exitCode=0
Jan 23 10:28:21 crc kubenswrapper[4684]: I0123 10:28:21.398521 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerDied","Data":"91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953"}
Jan 23 10:28:21 crc kubenswrapper[4684]: I0123 10:28:21.398716 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerStarted","Data":"8f3606c188b7f3a835b931c66250131dd6ddad0e70c79c5b6ff3c3be6d37280f"}
Jan 23 10:28:22 crc kubenswrapper[4684]: I0123 10:28:22.367517 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:22 crc kubenswrapper[4684]: I0123 10:28:22.367784 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:22 crc kubenswrapper[4684]: I0123 10:28:22.408820 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerStarted","Data":"4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a"}
Jan 23 10:28:22 crc kubenswrapper[4684]: I0123 10:28:22.419034 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:22 crc kubenswrapper[4684]: I0123 10:28:22.467548 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:23 crc kubenswrapper[4684]: I0123 10:28:23.420869 4684 generic.go:334] "Generic (PLEG): container finished" podID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerID="4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a" exitCode=0
Jan 23 10:28:23 crc kubenswrapper[4684]: I0123 10:28:23.423013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerDied","Data":"4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a"}
Jan 23 10:28:24 crc kubenswrapper[4684]: I0123 10:28:24.432118 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerStarted","Data":"fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e"}
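The pod_startup_latency_tracker lines record two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ readings). Re-deriving the certified-operators-h56lx numbers above in Go; the values are copied from the log, and the formula is inferred from these samples rather than quoted from kubelet source.

package main

import (
	"fmt"
	"time"
)

// Values copied from the certified-operators-h56lx entry above.
func main() {
	created := time.Date(2026, 1, 23, 10, 28, 12, 0, time.UTC)         // podCreationTimestamp
	running := time.Date(2026, 1, 23, 10, 28, 16, 357174129, time.UTC) // watchObservedRunningTime

	// Image-pull window from the monotonic m=+ readings.
	const firstStartedPulling = 4865.930506747
	const lastFinishedPulling = 4868.376515752

	e2e := running.Sub(created)
	pull := time.Duration((lastFinishedPulling - firstStartedPulling) * float64(time.Second))
	fmt.Println("podStartE2EDuration:", e2e)      // 4.357174129s
	fmt.Println("podStartSLOduration:", e2e-pull) // ~1.911165124s, pull time excluded
}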
Jan 23 10:28:24 crc kubenswrapper[4684]: I0123 10:28:24.463082 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7w8q" podStartSLOduration=2.8348063999999997 podStartE2EDuration="5.463059785s" podCreationTimestamp="2026-01-23 10:28:19 +0000 UTC" firstStartedPulling="2026-01-23 10:28:21.400179945 +0000 UTC m=+4874.023558486" lastFinishedPulling="2026-01-23 10:28:24.02843333 +0000 UTC m=+4876.651811871" observedRunningTime="2026-01-23 10:28:24.450008272 +0000 UTC m=+4877.073386823" watchObservedRunningTime="2026-01-23 10:28:24.463059785 +0000 UTC m=+4877.086438346"
Jan 23 10:28:24 crc kubenswrapper[4684]: I0123 10:28:24.810069 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h56lx"]
Jan 23 10:28:24 crc kubenswrapper[4684]: I0123 10:28:24.810474 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h56lx" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="registry-server" containerID="cri-o://3a6a175e8375cead938cfbb3f2df916b09e18dde09f03a4c2052f1e7966f2eb9" gracePeriod=2
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.442449 4684 generic.go:334] "Generic (PLEG): container finished" podID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerID="3a6a175e8375cead938cfbb3f2df916b09e18dde09f03a4c2052f1e7966f2eb9" exitCode=0
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.442478 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerDied","Data":"3a6a175e8375cead938cfbb3f2df916b09e18dde09f03a4c2052f1e7966f2eb9"}
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.442816 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56lx" event={"ID":"665ff6df-aacd-46b1-a9a3-20a90cd5a297","Type":"ContainerDied","Data":"31fa7744cf5169237d80f78185fcbddf79ccaea9087716e74862d9344e026816"}
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.442849 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31fa7744cf5169237d80f78185fcbddf79ccaea9087716e74862d9344e026816"
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.506092 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.598871 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-catalog-content\") pod \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") "
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.602796 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-utilities\") pod \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") "
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.602966 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhtwq\" (UniqueName: \"kubernetes.io/projected/665ff6df-aacd-46b1-a9a3-20a90cd5a297-kube-api-access-qhtwq\") pod \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\" (UID: \"665ff6df-aacd-46b1-a9a3-20a90cd5a297\") "
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.604627 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-utilities" (OuterVolumeSpecName: "utilities") pod "665ff6df-aacd-46b1-a9a3-20a90cd5a297" (UID: "665ff6df-aacd-46b1-a9a3-20a90cd5a297"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.609421 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665ff6df-aacd-46b1-a9a3-20a90cd5a297-kube-api-access-qhtwq" (OuterVolumeSpecName: "kube-api-access-qhtwq") pod "665ff6df-aacd-46b1-a9a3-20a90cd5a297" (UID: "665ff6df-aacd-46b1-a9a3-20a90cd5a297"). InnerVolumeSpecName "kube-api-access-qhtwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.655747 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "665ff6df-aacd-46b1-a9a3-20a90cd5a297" (UID: "665ff6df-aacd-46b1-a9a3-20a90cd5a297"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.706034 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.706074 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665ff6df-aacd-46b1-a9a3-20a90cd5a297-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 10:28:25 crc kubenswrapper[4684]: I0123 10:28:25.706085 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhtwq\" (UniqueName: \"kubernetes.io/projected/665ff6df-aacd-46b1-a9a3-20a90cd5a297-kube-api-access-qhtwq\") on node \"crc\" DevicePath \"\""
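The utilities and catalog-content volumes being torn down here are empty-dirs under the pod's kubelet directory; the /var/lib/kubelet/pods/<uid>/volumes root appears verbatim in the "Cleaned up orphaned pod volumes dir" entries. A sketch of that layout, with the kubernetes.io~empty-dir plugin subdirectory assumed from the UniqueName prefix rather than shown directly in this log:

package main

import (
	"fmt"
	"path/filepath"
)

// On-disk location of an empty-dir volume for a given pod. The pods/<uid>/volumes
// root is visible in this log; the kubernetes.io~empty-dir subdirectory is the
// usual plugin layout and is assumed here.
func emptyDirPath(podUID, volumeName string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID,
		"volumes", "kubernetes.io~empty-dir", volumeName)
}

func main() {
	fmt.Println(emptyDirPath("665ff6df-aacd-46b1-a9a3-20a90cd5a297", "utilities"))
	fmt.Println(emptyDirPath("665ff6df-aacd-46b1-a9a3-20a90cd5a297", "catalog-content"))
}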
Jan 23 10:28:26 crc kubenswrapper[4684]: I0123 10:28:26.450729 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56lx"
Jan 23 10:28:26 crc kubenswrapper[4684]: I0123 10:28:26.484497 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h56lx"]
Jan 23 10:28:26 crc kubenswrapper[4684]: I0123 10:28:26.498283 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h56lx"]
Jan 23 10:28:27 crc kubenswrapper[4684]: I0123 10:28:27.594835 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" path="/var/lib/kubelet/pods/665ff6df-aacd-46b1-a9a3-20a90cd5a297/volumes"
Jan 23 10:28:29 crc kubenswrapper[4684]: I0123 10:28:29.585018 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:28:29 crc kubenswrapper[4684]: E0123 10:28:29.585511 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:28:30 crc kubenswrapper[4684]: I0123 10:28:30.113098 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:30 crc kubenswrapper[4684]: I0123 10:28:30.113330 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:30 crc kubenswrapper[4684]: I0123 10:28:30.168939 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:30 crc kubenswrapper[4684]: I0123 10:28:30.532573 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:30 crc kubenswrapper[4684]: I0123 10:28:30.809055 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w8q"]
Jan 23 10:28:32 crc kubenswrapper[4684]: I0123 10:28:32.500782 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7w8q" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="registry-server" containerID="cri-o://fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e" gracePeriod=2
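"Killing container with a grace period" with gracePeriod=2 means the runtime gives the registry-server two seconds between the polite stop and the forced kill. A simplified Go illustration of that escalation; real kubelet drives this through the CRI rather than signalling processes directly, so this is a sketch only (Unix-only).

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// SIGTERM first, SIGKILL when the grace period runs out. gracePeriod=2
// (seconds) is what the log shows for these registry-server containers.
func killWithGrace(proc *os.Process, grace time.Duration, exited <-chan struct{}) {
	_ = proc.Signal(syscall.SIGTERM)
	select {
	case <-exited:
		fmt.Println("container exited within the grace period")
	case <-time.After(grace):
		fmt.Println("grace period expired, escalating to SIGKILL")
		_ = proc.Kill()
	}
}

func main() {
	// Usage sketch only: killWithGrace(p, 2*time.Second, exitedCh),
	// where exitedCh is closed by the goroutine waiting on the process.
	_ = killWithGrace
}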
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.017493 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.050512 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c6zm\" (UniqueName: \"kubernetes.io/projected/962bb5c2-d106-463b-92e4-dbcdb11172d9-kube-api-access-8c6zm\") pod \"962bb5c2-d106-463b-92e4-dbcdb11172d9\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") "
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.050768 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-utilities\") pod \"962bb5c2-d106-463b-92e4-dbcdb11172d9\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") "
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.050913 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-catalog-content\") pod \"962bb5c2-d106-463b-92e4-dbcdb11172d9\" (UID: \"962bb5c2-d106-463b-92e4-dbcdb11172d9\") "
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.053419 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-utilities" (OuterVolumeSpecName: "utilities") pod "962bb5c2-d106-463b-92e4-dbcdb11172d9" (UID: "962bb5c2-d106-463b-92e4-dbcdb11172d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.060368 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962bb5c2-d106-463b-92e4-dbcdb11172d9-kube-api-access-8c6zm" (OuterVolumeSpecName: "kube-api-access-8c6zm") pod "962bb5c2-d106-463b-92e4-dbcdb11172d9" (UID: "962bb5c2-d106-463b-92e4-dbcdb11172d9"). InnerVolumeSpecName "kube-api-access-8c6zm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.094598 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "962bb5c2-d106-463b-92e4-dbcdb11172d9" (UID: "962bb5c2-d106-463b-92e4-dbcdb11172d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.154157 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.154449 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962bb5c2-d106-463b-92e4-dbcdb11172d9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.154462 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c6zm\" (UniqueName: \"kubernetes.io/projected/962bb5c2-d106-463b-92e4-dbcdb11172d9-kube-api-access-8c6zm\") on node \"crc\" DevicePath \"\""
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.511687 4684 generic.go:334] "Generic (PLEG): container finished" podID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerID="fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e" exitCode=0
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.511762 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w8q"
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.511781 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerDied","Data":"fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e"}
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.512856 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w8q" event={"ID":"962bb5c2-d106-463b-92e4-dbcdb11172d9","Type":"ContainerDied","Data":"8f3606c188b7f3a835b931c66250131dd6ddad0e70c79c5b6ff3c3be6d37280f"}
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.512907 4684 scope.go:117] "RemoveContainer" containerID="fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e"
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.554384 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w8q"]
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.557558 4684 scope.go:117] "RemoveContainer" containerID="4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a"
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.566269 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w8q"]
Jan 23 10:28:33 crc kubenswrapper[4684]: I0123 10:28:33.595233 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" path="/var/lib/kubelet/pods/962bb5c2-d106-463b-92e4-dbcdb11172d9/volumes"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.020952 4684 scope.go:117] "RemoveContainer" containerID="91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.090458 4684 scope.go:117] "RemoveContainer" containerID="fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e"
Jan 23 10:28:34 crc kubenswrapper[4684]: E0123 10:28:34.091118 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e\": container with ID starting with fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e not found: ID does not exist" containerID="fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.091164 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e"} err="failed to get container status \"fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e\": rpc error: code = NotFound desc = could not find container \"fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e\": container with ID starting with fb139d6416fa2d8bfa93971715c403b2c12593caeaee61bd5a17c6365c15703e not found: ID does not exist"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.091191 4684 scope.go:117] "RemoveContainer" containerID="4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a"
Jan 23 10:28:34 crc kubenswrapper[4684]: E0123 10:28:34.091563 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a\": container with ID starting with 4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a not found: ID does not exist" containerID="4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.091612 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a"} err="failed to get container status \"4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a\": rpc error: code = NotFound desc = could not find container \"4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a\": container with ID starting with 4059f48b5c910f323e9661040c1d949adb916acab0d4c3ee94496c9b6717606a not found: ID does not exist"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.091647 4684 scope.go:117] "RemoveContainer" containerID="91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953"
Jan 23 10:28:34 crc kubenswrapper[4684]: E0123 10:28:34.092014 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953\": container with ID starting with 91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953 not found: ID does not exist" containerID="91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953"
Jan 23 10:28:34 crc kubenswrapper[4684]: I0123 10:28:34.092066 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953"} err="failed to get container status \"91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953\": rpc error: code = NotFound desc = could not find container \"91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953\": container with ID starting with 91af590653f64049a82dc2cadf69addf0fb6993ef0e43367dba67e350c664953 not found: ID does not exist"
Jan 23 10:28:43 crc kubenswrapper[4684]: I0123 10:28:43.582812 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
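The RemoveContainer / NotFound pairs above are benign: by the time the status lookup runs, the container is already gone, so the deletor logs the error and moves on. A caller-side Go sketch of treating a gRPC NotFound from the runtime as "already removed"; removeContainer is a hypothetical stand-in for a real CRI client call.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Treat CRI NotFound as success: deleting something that no longer
// exists is the desired end state, so there is nothing left to do.
func removeIdempotent(removeContainer func(id string) error, id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // the container is already gone
		}
		return fmt.Errorf("removing container %s: %w", id, err)
	}
	return nil
}

func main() {
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println("err:", removeIdempotent(notFound, "fb139d6416fa2d8b"))
}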
Jan 23 10:28:43 crc kubenswrapper[4684]: E0123 10:28:43.583661 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79"
Jan 23 10:28:58 crc kubenswrapper[4684]: I0123 10:28:58.582277 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc"
Jan 23 10:28:59 crc kubenswrapper[4684]: I0123 10:28:59.766587 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"1756bd959f6d356e73018d112baa4f2e84373b3c4243cd97818969471c5f5c40"}
Jan 23 10:29:02 crc kubenswrapper[4684]: I0123 10:29:02.797986 4684 generic.go:334] "Generic (PLEG): container finished" podID="a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" containerID="fd76173876ef1807d994f8ff7481a70adf6b0ba07b56c88142ccaa797558f7e1" exitCode=0
Jan 23 10:29:02 crc kubenswrapper[4684]: I0123 10:29:02.798087 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a","Type":"ContainerDied","Data":"fd76173876ef1807d994f8ff7481a70adf6b0ba07b56c88142ccaa797558f7e1"}
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.151112 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341141 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ssh-key\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341452 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341520 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-config-data\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341580 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config-secret\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341603 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-workdir\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341638 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-temporary\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341655 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ca-certs\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341758 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxl46\" (UniqueName: \"kubernetes.io/projected/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-kube-api-access-hxl46\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.341795 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config\") pod \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\" (UID: \"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a\") "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.342521 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-config-data" (OuterVolumeSpecName: "config-data") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.343406 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.351145 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.351628 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-kube-api-access-hxl46" (OuterVolumeSpecName: "kube-api-access-hxl46") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "kube-api-access-hxl46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.351670 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.373202 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.374874 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.381237 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.396069 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" (UID: "a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.444987 4684 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445020 4684 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445034 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445059 4684 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445073 4684 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445086 4684 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445097 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxl46\" (UniqueName: \"kubernetes.io/projected/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-kube-api-access-hxl46\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445108 4684 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.445119 4684 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.463675 4684 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.546565 4684 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.828050 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a","Type":"ContainerDied","Data":"c73a4b7cfa993f72d85b940fb8663f968abd5d08866b610e4c3f3153d7472c87"}
Jan 23 10:29:04 crc kubenswrapper[4684]: I0123 10:29:04.828091 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c73a4b7cfa993f72d85b940fb8663f968abd5d08866b610e4c3f3153d7472c87"
Jan 23 10:29:04 crc
kubenswrapper[4684]: I0123 10:29:04.828141 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.897986 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.900627 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" containerName="tempest-tests-tempest-tests-runner" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.900680 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" containerName="tempest-tests-tempest-tests-runner" Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.900781 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="registry-server" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.900802 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="registry-server" Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.900825 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="extract-content" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.900841 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="extract-content" Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.900963 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="extract-utilities" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.900985 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="extract-utilities" Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.901010 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="extract-utilities" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.901026 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="extract-utilities" Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.901071 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="registry-server" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.901088 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="registry-server" Jan 23 10:29:10 crc kubenswrapper[4684]: E0123 10:29:10.901113 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="extract-content" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.901132 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="extract-content" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.901574 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a" containerName="tempest-tests-tempest-tests-runner" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.901616 4684 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="665ff6df-aacd-46b1-a9a3-20a90cd5a297" containerName="registry-server" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.901652 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="962bb5c2-d106-463b-92e4-dbcdb11172d9" containerName="registry-server" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.915040 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.923900 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fwcdj" Jan 23 10:29:10 crc kubenswrapper[4684]: I0123 10:29:10.952958 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.033211 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknhj\" (UniqueName: \"kubernetes.io/projected/6a468899-9742-4407-95d4-55c6e2c14fe2-kube-api-access-bknhj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.033350 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.135315 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.135531 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bknhj\" (UniqueName: \"kubernetes.io/projected/6a468899-9742-4407-95d4-55c6e2c14fe2-kube-api-access-bknhj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.136452 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.159010 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bknhj\" (UniqueName: \"kubernetes.io/projected/6a468899-9742-4407-95d4-55c6e2c14fe2-kube-api-access-bknhj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 
10:29:11.171599 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a468899-9742-4407-95d4-55c6e2c14fe2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.232314 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.667418 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 10:29:11 crc kubenswrapper[4684]: I0123 10:29:11.887602 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6a468899-9742-4407-95d4-55c6e2c14fe2","Type":"ContainerStarted","Data":"f2e6cfa7c05dc6f19e6eca00775dcb4ad646f113d4f2467bb7e735c150356602"} Jan 23 10:29:13 crc kubenswrapper[4684]: I0123 10:29:13.908811 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6a468899-9742-4407-95d4-55c6e2c14fe2","Type":"ContainerStarted","Data":"d1ac40ad41f7b74873fe40b2ba014edccff8c47c9d0bd561ae90c54659d9bf3c"} Jan 23 10:29:13 crc kubenswrapper[4684]: I0123 10:29:13.936851 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.6944068469999998 podStartE2EDuration="3.936829894s" podCreationTimestamp="2026-01-23 10:29:10 +0000 UTC" firstStartedPulling="2026-01-23 10:29:11.67382162 +0000 UTC m=+4924.297200161" lastFinishedPulling="2026-01-23 10:29:12.916244667 +0000 UTC m=+4925.539623208" observedRunningTime="2026-01-23 10:29:13.923371169 +0000 UTC m=+4926.546749710" watchObservedRunningTime="2026-01-23 10:29:13.936829894 +0000 UTC m=+4926.560208435" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.505277 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vptsh/must-gather-h8xvk"] Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.507281 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.517808 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vptsh"/"kube-root-ca.crt" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.530900 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vptsh"/"openshift-service-ca.crt" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.531310 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vptsh/must-gather-h8xvk"] Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.675776 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-must-gather-output\") pod \"must-gather-h8xvk\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.676190 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbrt4\" (UniqueName: \"kubernetes.io/projected/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-kube-api-access-wbrt4\") pod \"must-gather-h8xvk\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.777561 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-must-gather-output\") pod \"must-gather-h8xvk\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.777640 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbrt4\" (UniqueName: \"kubernetes.io/projected/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-kube-api-access-wbrt4\") pod \"must-gather-h8xvk\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.778001 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-must-gather-output\") pod \"must-gather-h8xvk\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.796300 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbrt4\" (UniqueName: \"kubernetes.io/projected/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-kube-api-access-wbrt4\") pod \"must-gather-h8xvk\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:40 crc kubenswrapper[4684]: I0123 10:29:40.833458 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:29:41 crc kubenswrapper[4684]: I0123 10:29:41.408028 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vptsh/must-gather-h8xvk"] Jan 23 10:29:42 crc kubenswrapper[4684]: I0123 10:29:42.163381 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/must-gather-h8xvk" event={"ID":"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87","Type":"ContainerStarted","Data":"0bc2c9963612e16762dcdf0a8daf2b5fa2d51865e1d5cc9ea902a21a29658ac3"} Jan 23 10:29:51 crc kubenswrapper[4684]: I0123 10:29:51.268373 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/must-gather-h8xvk" event={"ID":"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87","Type":"ContainerStarted","Data":"df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef"} Jan 23 10:29:51 crc kubenswrapper[4684]: I0123 10:29:51.269037 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/must-gather-h8xvk" event={"ID":"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87","Type":"ContainerStarted","Data":"d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368"} Jan 23 10:29:51 crc kubenswrapper[4684]: I0123 10:29:51.291338 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vptsh/must-gather-h8xvk" podStartSLOduration=2.777642716 podStartE2EDuration="11.29131409s" podCreationTimestamp="2026-01-23 10:29:40 +0000 UTC" firstStartedPulling="2026-01-23 10:29:41.538322207 +0000 UTC m=+4954.161700748" lastFinishedPulling="2026-01-23 10:29:50.051993581 +0000 UTC m=+4962.675372122" observedRunningTime="2026-01-23 10:29:51.288012305 +0000 UTC m=+4963.911390846" watchObservedRunningTime="2026-01-23 10:29:51.29131409 +0000 UTC m=+4963.914692631" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.465567 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vptsh/crc-debug-jjbf8"] Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.467290 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.469122 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vptsh"/"default-dockercfg-db6hw" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.474566 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jms4n\" (UniqueName: \"kubernetes.io/projected/47249c2d-8305-449c-9b65-e6ca7137d445-kube-api-access-jms4n\") pod \"crc-debug-jjbf8\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.474910 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47249c2d-8305-449c-9b65-e6ca7137d445-host\") pod \"crc-debug-jjbf8\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.577464 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jms4n\" (UniqueName: \"kubernetes.io/projected/47249c2d-8305-449c-9b65-e6ca7137d445-kube-api-access-jms4n\") pod \"crc-debug-jjbf8\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.577587 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47249c2d-8305-449c-9b65-e6ca7137d445-host\") pod \"crc-debug-jjbf8\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.577787 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47249c2d-8305-449c-9b65-e6ca7137d445-host\") pod \"crc-debug-jjbf8\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.605061 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jms4n\" (UniqueName: \"kubernetes.io/projected/47249c2d-8305-449c-9b65-e6ca7137d445-kube-api-access-jms4n\") pod \"crc-debug-jjbf8\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:56 crc kubenswrapper[4684]: I0123 10:29:56.785402 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:29:57 crc kubenswrapper[4684]: I0123 10:29:57.342113 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" event={"ID":"47249c2d-8305-449c-9b65-e6ca7137d445","Type":"ContainerStarted","Data":"581f5d131e1737e4b1a548d894e1d6a8ce82230a669b2b7b87cffeb5cb8b3ed9"} Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.194278 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz"] Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.195936 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.198209 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.198425 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.216625 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz"] Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.264162 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6ddb\" (UniqueName: \"kubernetes.io/projected/53919638-0a82-4b57-9458-bd75d61d8017-kube-api-access-k6ddb\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.264221 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53919638-0a82-4b57-9458-bd75d61d8017-secret-volume\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.264482 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53919638-0a82-4b57-9458-bd75d61d8017-config-volume\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.365175 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6ddb\" (UniqueName: \"kubernetes.io/projected/53919638-0a82-4b57-9458-bd75d61d8017-kube-api-access-k6ddb\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.365212 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53919638-0a82-4b57-9458-bd75d61d8017-secret-volume\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.365503 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53919638-0a82-4b57-9458-bd75d61d8017-config-volume\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.367008 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53919638-0a82-4b57-9458-bd75d61d8017-config-volume\") pod 
\"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.504388 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53919638-0a82-4b57-9458-bd75d61d8017-secret-volume\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.508630 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6ddb\" (UniqueName: \"kubernetes.io/projected/53919638-0a82-4b57-9458-bd75d61d8017-kube-api-access-k6ddb\") pod \"collect-profiles-29486070-2ljcz\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:00 crc kubenswrapper[4684]: I0123 10:30:00.529870 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:01 crc kubenswrapper[4684]: I0123 10:30:01.109476 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz"] Jan 23 10:30:01 crc kubenswrapper[4684]: I0123 10:30:01.391930 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" event={"ID":"53919638-0a82-4b57-9458-bd75d61d8017","Type":"ContainerStarted","Data":"95440135792acbcd577f76e3e2275062003287c93a696ddd2599858864c3e07d"} Jan 23 10:30:01 crc kubenswrapper[4684]: I0123 10:30:01.392001 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" event={"ID":"53919638-0a82-4b57-9458-bd75d61d8017","Type":"ContainerStarted","Data":"649eda747ee536b41f5d28910db2ef4abfef3b5803c9196a5abe3196d7c0e640"} Jan 23 10:30:01 crc kubenswrapper[4684]: I0123 10:30:01.418010 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" podStartSLOduration=1.417986727 podStartE2EDuration="1.417986727s" podCreationTimestamp="2026-01-23 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:30:01.405059957 +0000 UTC m=+4974.028438508" watchObservedRunningTime="2026-01-23 10:30:01.417986727 +0000 UTC m=+4974.041365278" Jan 23 10:30:02 crc kubenswrapper[4684]: I0123 10:30:02.400937 4684 generic.go:334] "Generic (PLEG): container finished" podID="53919638-0a82-4b57-9458-bd75d61d8017" containerID="95440135792acbcd577f76e3e2275062003287c93a696ddd2599858864c3e07d" exitCode=0 Jan 23 10:30:02 crc kubenswrapper[4684]: I0123 10:30:02.401028 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" event={"ID":"53919638-0a82-4b57-9458-bd75d61d8017","Type":"ContainerDied","Data":"95440135792acbcd577f76e3e2275062003287c93a696ddd2599858864c3e07d"} Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.710025 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.715575 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53919638-0a82-4b57-9458-bd75d61d8017-config-volume\") pod \"53919638-0a82-4b57-9458-bd75d61d8017\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.715622 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53919638-0a82-4b57-9458-bd75d61d8017-secret-volume\") pod \"53919638-0a82-4b57-9458-bd75d61d8017\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.716425 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53919638-0a82-4b57-9458-bd75d61d8017-config-volume" (OuterVolumeSpecName: "config-volume") pod "53919638-0a82-4b57-9458-bd75d61d8017" (UID: "53919638-0a82-4b57-9458-bd75d61d8017"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.715691 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6ddb\" (UniqueName: \"kubernetes.io/projected/53919638-0a82-4b57-9458-bd75d61d8017-kube-api-access-k6ddb\") pod \"53919638-0a82-4b57-9458-bd75d61d8017\" (UID: \"53919638-0a82-4b57-9458-bd75d61d8017\") " Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.717187 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53919638-0a82-4b57-9458-bd75d61d8017-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.723803 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53919638-0a82-4b57-9458-bd75d61d8017-kube-api-access-k6ddb" (OuterVolumeSpecName: "kube-api-access-k6ddb") pod "53919638-0a82-4b57-9458-bd75d61d8017" (UID: "53919638-0a82-4b57-9458-bd75d61d8017"). InnerVolumeSpecName "kube-api-access-k6ddb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.723898 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53919638-0a82-4b57-9458-bd75d61d8017-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53919638-0a82-4b57-9458-bd75d61d8017" (UID: "53919638-0a82-4b57-9458-bd75d61d8017"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:30:13 crc kubenswrapper[4684]: E0123 10:30:13.773196 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 23 10:30:13 crc kubenswrapper[4684]: E0123 10:30:13.773413 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jms4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-jjbf8_openshift-must-gather-vptsh(47249c2d-8305-449c-9b65-e6ca7137d445): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 10:30:13 crc kubenswrapper[4684]: E0123 10:30:13.774828 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" podUID="47249c2d-8305-449c-9b65-e6ca7137d445" Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.818929 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53919638-0a82-4b57-9458-bd75d61d8017-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 10:30:13 crc kubenswrapper[4684]: I0123 10:30:13.818965 4684 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-k6ddb\" (UniqueName: \"kubernetes.io/projected/53919638-0a82-4b57-9458-bd75d61d8017-kube-api-access-k6ddb\") on node \"crc\" DevicePath \"\"" Jan 23 10:30:14 crc kubenswrapper[4684]: I0123 10:30:14.532893 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" Jan 23 10:30:14 crc kubenswrapper[4684]: I0123 10:30:14.532952 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486070-2ljcz" event={"ID":"53919638-0a82-4b57-9458-bd75d61d8017","Type":"ContainerDied","Data":"649eda747ee536b41f5d28910db2ef4abfef3b5803c9196a5abe3196d7c0e640"} Jan 23 10:30:14 crc kubenswrapper[4684]: I0123 10:30:14.534144 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649eda747ee536b41f5d28910db2ef4abfef3b5803c9196a5abe3196d7c0e640" Jan 23 10:30:14 crc kubenswrapper[4684]: E0123 10:30:14.534468 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" podUID="47249c2d-8305-449c-9b65-e6ca7137d445" Jan 23 10:30:14 crc kubenswrapper[4684]: I0123 10:30:14.795842 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l"] Jan 23 10:30:14 crc kubenswrapper[4684]: I0123 10:30:14.804093 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-grn9l"] Jan 23 10:30:15 crc kubenswrapper[4684]: I0123 10:30:15.597194 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a849a5-07be-46fe-bbcd-d6c77b0b740a" path="/var/lib/kubelet/pods/d2a849a5-07be-46fe-bbcd-d6c77b0b740a/volumes" Jan 23 10:30:17 crc kubenswrapper[4684]: I0123 10:30:17.542255 4684 scope.go:117] "RemoveContainer" containerID="e149ef284d142fb4d666f48e7d21111f7b0a1565ed435f6d4ca08cc41235437c" Jan 23 10:30:27 crc kubenswrapper[4684]: I0123 10:30:27.644208 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" event={"ID":"47249c2d-8305-449c-9b65-e6ca7137d445","Type":"ContainerStarted","Data":"20c55fbf31e83b5a090b264e60b7e74a25d6c1a2685de42d4d779ae415fe95e2"} Jan 23 10:30:27 crc kubenswrapper[4684]: I0123 10:30:27.663301 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" podStartSLOduration=1.474670516 podStartE2EDuration="31.663282092s" podCreationTimestamp="2026-01-23 10:29:56 +0000 UTC" firstStartedPulling="2026-01-23 10:29:56.824823509 +0000 UTC m=+4969.448202050" lastFinishedPulling="2026-01-23 10:30:27.013435085 +0000 UTC m=+4999.636813626" observedRunningTime="2026-01-23 10:30:27.657089125 +0000 UTC m=+5000.280467666" watchObservedRunningTime="2026-01-23 10:30:27.663282092 +0000 UTC m=+5000.286660633" Jan 23 10:31:09 crc kubenswrapper[4684]: I0123 10:31:09.004125 4684 generic.go:334] "Generic (PLEG): container finished" podID="47249c2d-8305-449c-9b65-e6ca7137d445" containerID="20c55fbf31e83b5a090b264e60b7e74a25d6c1a2685de42d4d779ae415fe95e2" exitCode=0 Jan 23 10:31:09 crc kubenswrapper[4684]: I0123 10:31:09.004350 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-vptsh/crc-debug-jjbf8" event={"ID":"47249c2d-8305-449c-9b65-e6ca7137d445","Type":"ContainerDied","Data":"20c55fbf31e83b5a090b264e60b7e74a25d6c1a2685de42d4d779ae415fe95e2"} Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.151681 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.195966 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vptsh/crc-debug-jjbf8"] Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.206060 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vptsh/crc-debug-jjbf8"] Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.256843 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jms4n\" (UniqueName: \"kubernetes.io/projected/47249c2d-8305-449c-9b65-e6ca7137d445-kube-api-access-jms4n\") pod \"47249c2d-8305-449c-9b65-e6ca7137d445\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.256895 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47249c2d-8305-449c-9b65-e6ca7137d445-host\") pod \"47249c2d-8305-449c-9b65-e6ca7137d445\" (UID: \"47249c2d-8305-449c-9b65-e6ca7137d445\") " Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.256978 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47249c2d-8305-449c-9b65-e6ca7137d445-host" (OuterVolumeSpecName: "host") pod "47249c2d-8305-449c-9b65-e6ca7137d445" (UID: "47249c2d-8305-449c-9b65-e6ca7137d445"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.257521 4684 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47249c2d-8305-449c-9b65-e6ca7137d445-host\") on node \"crc\" DevicePath \"\"" Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.262278 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47249c2d-8305-449c-9b65-e6ca7137d445-kube-api-access-jms4n" (OuterVolumeSpecName: "kube-api-access-jms4n") pod "47249c2d-8305-449c-9b65-e6ca7137d445" (UID: "47249c2d-8305-449c-9b65-e6ca7137d445"). InnerVolumeSpecName "kube-api-access-jms4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:31:10 crc kubenswrapper[4684]: I0123 10:31:10.359417 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jms4n\" (UniqueName: \"kubernetes.io/projected/47249c2d-8305-449c-9b65-e6ca7137d445-kube-api-access-jms4n\") on node \"crc\" DevicePath \"\"" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.028856 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="581f5d131e1737e4b1a548d894e1d6a8ce82230a669b2b7b87cffeb5cb8b3ed9" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.028900 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jjbf8" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.390848 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vptsh/crc-debug-jj64v"] Jan 23 10:31:11 crc kubenswrapper[4684]: E0123 10:31:11.391393 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53919638-0a82-4b57-9458-bd75d61d8017" containerName="collect-profiles" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.391420 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="53919638-0a82-4b57-9458-bd75d61d8017" containerName="collect-profiles" Jan 23 10:31:11 crc kubenswrapper[4684]: E0123 10:31:11.391467 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47249c2d-8305-449c-9b65-e6ca7137d445" containerName="container-00" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.391477 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="47249c2d-8305-449c-9b65-e6ca7137d445" containerName="container-00" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.391730 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="47249c2d-8305-449c-9b65-e6ca7137d445" containerName="container-00" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.391748 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="53919638-0a82-4b57-9458-bd75d61d8017" containerName="collect-profiles" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.392561 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.394533 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vptsh"/"default-dockercfg-db6hw" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.480034 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkspj\" (UniqueName: \"kubernetes.io/projected/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-kube-api-access-mkspj\") pod \"crc-debug-jj64v\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.480587 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-host\") pod \"crc-debug-jj64v\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.582146 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-host\") pod \"crc-debug-jj64v\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.582369 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkspj\" (UniqueName: \"kubernetes.io/projected/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-kube-api-access-mkspj\") pod \"crc-debug-jj64v\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.582847 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-host\") pod \"crc-debug-jj64v\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.611252 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47249c2d-8305-449c-9b65-e6ca7137d445" path="/var/lib/kubelet/pods/47249c2d-8305-449c-9b65-e6ca7137d445/volumes" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.623520 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkspj\" (UniqueName: \"kubernetes.io/projected/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-kube-api-access-mkspj\") pod \"crc-debug-jj64v\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:11 crc kubenswrapper[4684]: I0123 10:31:11.711058 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:12 crc kubenswrapper[4684]: I0123 10:31:12.039754 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/crc-debug-jj64v" event={"ID":"e95c2340-2f23-4e3a-bf6c-29f70dab5afd","Type":"ContainerStarted","Data":"242b4cd0b4650f59995b371e03ba496f8a1160924eba26eb9cc5f263e9bff2fa"} Jan 23 10:31:12 crc kubenswrapper[4684]: I0123 10:31:12.039797 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/crc-debug-jj64v" event={"ID":"e95c2340-2f23-4e3a-bf6c-29f70dab5afd","Type":"ContainerStarted","Data":"04002982c42a2ccd1d98598a8d1379f7c6b2ef5fef826ee5f12fee4b711a8887"} Jan 23 10:31:12 crc kubenswrapper[4684]: I0123 10:31:12.515648 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vptsh/crc-debug-jj64v"] Jan 23 10:31:12 crc kubenswrapper[4684]: I0123 10:31:12.530110 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vptsh/crc-debug-jj64v"] Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.049430 4684 generic.go:334] "Generic (PLEG): container finished" podID="e95c2340-2f23-4e3a-bf6c-29f70dab5afd" containerID="242b4cd0b4650f59995b371e03ba496f8a1160924eba26eb9cc5f263e9bff2fa" exitCode=0 Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.155801 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.218312 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkspj\" (UniqueName: \"kubernetes.io/projected/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-kube-api-access-mkspj\") pod \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.218451 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-host\") pod \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\" (UID: \"e95c2340-2f23-4e3a-bf6c-29f70dab5afd\") " Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.219182 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-host" (OuterVolumeSpecName: "host") pod "e95c2340-2f23-4e3a-bf6c-29f70dab5afd" (UID: "e95c2340-2f23-4e3a-bf6c-29f70dab5afd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.225974 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-kube-api-access-mkspj" (OuterVolumeSpecName: "kube-api-access-mkspj") pod "e95c2340-2f23-4e3a-bf6c-29f70dab5afd" (UID: "e95c2340-2f23-4e3a-bf6c-29f70dab5afd"). InnerVolumeSpecName "kube-api-access-mkspj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.320900 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkspj\" (UniqueName: \"kubernetes.io/projected/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-kube-api-access-mkspj\") on node \"crc\" DevicePath \"\"" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.320933 4684 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e95c2340-2f23-4e3a-bf6c-29f70dab5afd-host\") on node \"crc\" DevicePath \"\"" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.595976 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95c2340-2f23-4e3a-bf6c-29f70dab5afd" path="/var/lib/kubelet/pods/e95c2340-2f23-4e3a-bf6c-29f70dab5afd/volumes" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.713592 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vptsh/crc-debug-j5sw7"] Jan 23 10:31:13 crc kubenswrapper[4684]: E0123 10:31:13.714121 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95c2340-2f23-4e3a-bf6c-29f70dab5afd" containerName="container-00" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.714142 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95c2340-2f23-4e3a-bf6c-29f70dab5afd" containerName="container-00" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.714346 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95c2340-2f23-4e3a-bf6c-29f70dab5afd" containerName="container-00" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.715009 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.728299 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.728355 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.833778 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f90b044-e6b8-418c-9d3d-cb27b4220818-host\") pod \"crc-debug-j5sw7\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.834716 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnpfd\" (UniqueName: \"kubernetes.io/projected/2f90b044-e6b8-418c-9d3d-cb27b4220818-kube-api-access-mnpfd\") pod \"crc-debug-j5sw7\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.936155 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f90b044-e6b8-418c-9d3d-cb27b4220818-host\") pod \"crc-debug-j5sw7\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.936216 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnpfd\" (UniqueName: \"kubernetes.io/projected/2f90b044-e6b8-418c-9d3d-cb27b4220818-kube-api-access-mnpfd\") pod \"crc-debug-j5sw7\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.936639 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f90b044-e6b8-418c-9d3d-cb27b4220818-host\") pod \"crc-debug-j5sw7\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:13 crc kubenswrapper[4684]: I0123 10:31:13.967040 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnpfd\" (UniqueName: \"kubernetes.io/projected/2f90b044-e6b8-418c-9d3d-cb27b4220818-kube-api-access-mnpfd\") pod \"crc-debug-j5sw7\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:14 crc kubenswrapper[4684]: I0123 10:31:14.035187 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:14 crc kubenswrapper[4684]: I0123 10:31:14.069085 4684 scope.go:117] "RemoveContainer" containerID="242b4cd0b4650f59995b371e03ba496f8a1160924eba26eb9cc5f263e9bff2fa" Jan 23 10:31:14 crc kubenswrapper[4684]: I0123 10:31:14.069135 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-jj64v" Jan 23 10:31:15 crc kubenswrapper[4684]: I0123 10:31:15.079534 4684 generic.go:334] "Generic (PLEG): container finished" podID="2f90b044-e6b8-418c-9d3d-cb27b4220818" containerID="49df1a70af43cf2b6b33d4177c388e2e5197124b29d91072bb398a916861ee0b" exitCode=0 Jan 23 10:31:15 crc kubenswrapper[4684]: I0123 10:31:15.079612 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/crc-debug-j5sw7" event={"ID":"2f90b044-e6b8-418c-9d3d-cb27b4220818","Type":"ContainerDied","Data":"49df1a70af43cf2b6b33d4177c388e2e5197124b29d91072bb398a916861ee0b"} Jan 23 10:31:15 crc kubenswrapper[4684]: I0123 10:31:15.079901 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/crc-debug-j5sw7" event={"ID":"2f90b044-e6b8-418c-9d3d-cb27b4220818","Type":"ContainerStarted","Data":"805ec911d4918d9dc22113957f5f4a39e128354cb715908fe85c144910928e9d"} Jan 23 10:31:15 crc kubenswrapper[4684]: I0123 10:31:15.121698 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vptsh/crc-debug-j5sw7"] Jan 23 10:31:15 crc kubenswrapper[4684]: I0123 10:31:15.136270 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vptsh/crc-debug-j5sw7"] Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.222473 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.284249 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnpfd\" (UniqueName: \"kubernetes.io/projected/2f90b044-e6b8-418c-9d3d-cb27b4220818-kube-api-access-mnpfd\") pod \"2f90b044-e6b8-418c-9d3d-cb27b4220818\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.284545 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f90b044-e6b8-418c-9d3d-cb27b4220818-host\") pod \"2f90b044-e6b8-418c-9d3d-cb27b4220818\" (UID: \"2f90b044-e6b8-418c-9d3d-cb27b4220818\") " Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.284649 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f90b044-e6b8-418c-9d3d-cb27b4220818-host" (OuterVolumeSpecName: "host") pod "2f90b044-e6b8-418c-9d3d-cb27b4220818" (UID: "2f90b044-e6b8-418c-9d3d-cb27b4220818"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.285144 4684 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f90b044-e6b8-418c-9d3d-cb27b4220818-host\") on node \"crc\" DevicePath \"\"" Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.306933 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f90b044-e6b8-418c-9d3d-cb27b4220818-kube-api-access-mnpfd" (OuterVolumeSpecName: "kube-api-access-mnpfd") pod "2f90b044-e6b8-418c-9d3d-cb27b4220818" (UID: "2f90b044-e6b8-418c-9d3d-cb27b4220818"). InnerVolumeSpecName "kube-api-access-mnpfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:31:16 crc kubenswrapper[4684]: I0123 10:31:16.387228 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnpfd\" (UniqueName: \"kubernetes.io/projected/2f90b044-e6b8-418c-9d3d-cb27b4220818-kube-api-access-mnpfd\") on node \"crc\" DevicePath \"\"" Jan 23 10:31:17 crc kubenswrapper[4684]: I0123 10:31:17.101392 4684 scope.go:117] "RemoveContainer" containerID="49df1a70af43cf2b6b33d4177c388e2e5197124b29d91072bb398a916861ee0b" Jan 23 10:31:17 crc kubenswrapper[4684]: I0123 10:31:17.101420 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/crc-debug-j5sw7" Jan 23 10:31:17 crc kubenswrapper[4684]: I0123 10:31:17.591378 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f90b044-e6b8-418c-9d3d-cb27b4220818" path="/var/lib/kubelet/pods/2f90b044-e6b8-418c-9d3d-cb27b4220818/volumes" Jan 23 10:31:43 crc kubenswrapper[4684]: I0123 10:31:43.729091 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:31:43 crc kubenswrapper[4684]: I0123 10:31:43.729762 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:32:09 crc kubenswrapper[4684]: I0123 10:32:09.678147 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fb45b76fb-6d9bh_d239343a-876f-4e5e-abf8-2bd91fee9812/barbican-api/0.log" Jan 23 10:32:09 crc kubenswrapper[4684]: I0123 10:32:09.825623 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fb45b76fb-6d9bh_d239343a-876f-4e5e-abf8-2bd91fee9812/barbican-api-log/0.log" Jan 23 10:32:09 crc kubenswrapper[4684]: I0123 10:32:09.922173 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c6d999bfd-wgh9p_dd332188-f0b4-4a86-a7ec-c722f64e1e41/barbican-keystone-listener/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.000810 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c6d999bfd-wgh9p_dd332188-f0b4-4a86-a7ec-c722f64e1e41/barbican-keystone-listener-log/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.156640 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-74bcc55f89-qgvh5_996c56f4-2118-4795-91da-d78f1ad2f792/barbican-worker/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.228620 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bcc55f89-qgvh5_996c56f4-2118-4795-91da-d78f1ad2f792/barbican-worker-log/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.461924 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk_47eb1e50-9644-40c1-b739-f70c2274808c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.530691 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/ceilometer-central-agent/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.944821 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/proxy-httpd/0.log" Jan 23 10:32:10 crc kubenswrapper[4684]: I0123 10:32:10.984537 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/ceilometer-notification-agent/0.log" Jan 23 10:32:11 crc kubenswrapper[4684]: I0123 10:32:11.030762 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/sg-core/0.log" Jan 23 10:32:11 crc kubenswrapper[4684]: I0123 10:32:11.308530 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8_5f77b49d-cf17-4b55-9ef8-0d0e13966845/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:11 crc kubenswrapper[4684]: I0123 10:32:11.366482 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz_01a17f7c-b39e-4dd6-9a40-d474056ee41a/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:11 crc kubenswrapper[4684]: I0123 10:32:11.565449 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6fc125a2-7cc0-40a7-bb2c-acc93ba7866a/cinder-api/0.log" Jan 23 10:32:11 crc kubenswrapper[4684]: I0123 10:32:11.655911 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6fc125a2-7cc0-40a7-bb2c-acc93ba7866a/cinder-api-log/0.log" Jan 23 10:32:11 crc kubenswrapper[4684]: I0123 10:32:11.959717 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_46859102-633b-4fca-bbeb-c34dfdbea96d/probe/0.log" Jan 23 10:32:12 crc kubenswrapper[4684]: I0123 10:32:12.081662 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_46859102-633b-4fca-bbeb-c34dfdbea96d/cinder-backup/0.log" Jan 23 10:32:12 crc kubenswrapper[4684]: I0123 10:32:12.120960 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1bad04-8e0e-4dee-8cef-90091c05526f/cinder-scheduler/0.log" Jan 23 10:32:12 crc kubenswrapper[4684]: I0123 10:32:12.297772 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1bad04-8e0e-4dee-8cef-90091c05526f/probe/0.log" Jan 23 10:32:12 crc kubenswrapper[4684]: I0123 10:32:12.477987 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_2d39cffc-9089-47c7-acd7-50bb64ed8f61/cinder-volume/0.log" Jan 23 10:32:12 crc 
kubenswrapper[4684]: I0123 10:32:12.568222 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_2d39cffc-9089-47c7-acd7-50bb64ed8f61/probe/0.log" Jan 23 10:32:12 crc kubenswrapper[4684]: I0123 10:32:12.792199 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb_f86589ab-3e45-48a5-a081-96572c2bcfca/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:12 crc kubenswrapper[4684]: I0123 10:32:12.834118 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8_8cbed0d5-0896-4efe-af09-8469dcbd2cfb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.071095 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-dbdfc799f-zk2np_e93e4d61-ad39-41c9-80ce-653f91213f4d/init/0.log" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.314676 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-dbdfc799f-zk2np_e93e4d61-ad39-41c9-80ce-653f91213f4d/init/0.log" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.473850 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a7c366f0-4ad9-4ec9-91ff-bab599bae5d0/glance-httpd/0.log" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.580528 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-dbdfc799f-zk2np_e93e4d61-ad39-41c9-80ce-653f91213f4d/dnsmasq-dns/0.log" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.729163 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.729228 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.729275 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.730021 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1756bd959f6d356e73018d112baa4f2e84373b3c4243cd97818969471c5f5c40"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.730095 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://1756bd959f6d356e73018d112baa4f2e84373b3c4243cd97818969471c5f5c40" gracePeriod=600 Jan 23 10:32:13 crc kubenswrapper[4684]: I0123 10:32:13.968331 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_b0804b14-3b60-4dbc-8e29-9cb493b96de4/glance-httpd/0.log" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.055939 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a7c366f0-4ad9-4ec9-91ff-bab599bae5d0/glance-log/0.log" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.202611 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0804b14-3b60-4dbc-8e29-9cb493b96de4/glance-log/0.log" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.403914 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df5b758fb-8sfdj_78d43a15-1645-42a6-a25b-a6c4d7a244c4/horizon/1.log" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.570232 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df5b758fb-8sfdj_78d43a15-1645-42a6-a25b-a6c4d7a244c4/horizon/0.log" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.601456 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="1756bd959f6d356e73018d112baa4f2e84373b3c4243cd97818969471c5f5c40" exitCode=0 Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.601539 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"1756bd959f6d356e73018d112baa4f2e84373b3c4243cd97818969471c5f5c40"} Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.601592 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf"} Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.601619 4684 scope.go:117] "RemoveContainer" containerID="ea556fc8dc883b8c3494c093263ef6a2ba8fb783710728a6eb74afd116ee0ccc" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.821144 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df5b758fb-8sfdj_78d43a15-1645-42a6-a25b-a6c4d7a244c4/horizon-log/0.log" Jan 23 10:32:14 crc kubenswrapper[4684]: I0123 10:32:14.885604 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx_2aa3021c-18ad-49eb-ae34-b54e30548ccf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:15 crc kubenswrapper[4684]: I0123 10:32:15.219024 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2nhwv_9ed4c3b1-8a47-426f-a72f-80df33efa202/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:15 crc kubenswrapper[4684]: I0123 10:32:15.408520 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74b94f7dd5-jfwln_c7c30d54-36fc-47e2-ad40-c3e530d1b721/keystone-api/0.log" Jan 23 10:32:15 crc kubenswrapper[4684]: I0123 10:32:15.585713 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486041-8929f_1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2/keystone-cron/0.log" Jan 23 10:32:15 crc kubenswrapper[4684]: I0123 10:32:15.672299 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2380836b-7770-4b06-9cb2-b61dfda5e96a/kube-state-metrics/0.log" Jan 23 
10:32:15 crc kubenswrapper[4684]: I0123 10:32:15.938995 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q_5310afc8-7024-4b88-b421-28631272375a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.029973 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f183a69c-226e-4737-81b8-01cae8e76539/manila-api-log/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.115489 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f183a69c-226e-4737-81b8-01cae8e76539/manila-api/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.250740 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_1be4f920-aa7e-412c-8241-a795a65be1bb/manila-scheduler/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.286665 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_1be4f920-aa7e-412c-8241-a795a65be1bb/probe/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.411970 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f7b4b82a-f432-48b9-ae9c-2d23a78aec42/manila-share/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.523088 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f7b4b82a-f432-48b9-ae9c-2d23a78aec42/probe/0.log" Jan 23 10:32:16 crc kubenswrapper[4684]: I0123 10:32:16.863446 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f5484d975-q9jz7_51e1f37f-89c0-4b47-944a-ca74b33d32ce/neutron-httpd/0.log" Jan 23 10:32:17 crc kubenswrapper[4684]: I0123 10:32:17.000393 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f5484d975-q9jz7_51e1f37f-89c0-4b47-944a-ca74b33d32ce/neutron-api/0.log" Jan 23 10:32:17 crc kubenswrapper[4684]: I0123 10:32:17.252963 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h_cb533e15-1dac-453b-a0d7-041112a91f0b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:17 crc kubenswrapper[4684]: I0123 10:32:17.928967 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0cd885d-0d54-4392-9d8a-cd2cb48b47d2/nova-api-log/0.log" Jan 23 10:32:18 crc kubenswrapper[4684]: I0123 10:32:18.002986 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f499765b-3360-4bf8-af8c-415602c1c519/nova-cell0-conductor-conductor/0.log" Jan 23 10:32:18 crc kubenswrapper[4684]: I0123 10:32:18.305754 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a36234df-99ba-470a-8309-55d1e0f53072/nova-cell1-conductor-conductor/0.log" Jan 23 10:32:18 crc kubenswrapper[4684]: I0123 10:32:18.321390 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0cd885d-0d54-4392-9d8a-cd2cb48b47d2/nova-api-api/0.log" Jan 23 10:32:18 crc kubenswrapper[4684]: I0123 10:32:18.728831 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c03f1660-c3bd-4803-b1fd-c07c36966484/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 10:32:19 crc kubenswrapper[4684]: I0123 10:32:19.072489 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk_55887726-e3b8-4e73-a5fe-c82860636e1b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:19 crc kubenswrapper[4684]: I0123 10:32:19.259365 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_48b55b45-1ad6-4310-aaff-0a978bbf5538/nova-metadata-log/0.log" Jan 23 10:32:19 crc kubenswrapper[4684]: I0123 10:32:19.651302 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2fab0d59-7e3d-4c70-a3a7-63dcb3629988/nova-scheduler-scheduler/0.log" Jan 23 10:32:19 crc kubenswrapper[4684]: I0123 10:32:19.719143 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80a7fc30-a101-4948-9e81-34c2dfb02797/mysql-bootstrap/0.log" Jan 23 10:32:20 crc kubenswrapper[4684]: I0123 10:32:20.219480 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80a7fc30-a101-4948-9e81-34c2dfb02797/mysql-bootstrap/0.log" Jan 23 10:32:20 crc kubenswrapper[4684]: I0123 10:32:20.305977 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80a7fc30-a101-4948-9e81-34c2dfb02797/galera/0.log" Jan 23 10:32:20 crc kubenswrapper[4684]: I0123 10:32:20.521085 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c5f17c-8303-4cae-b577-1da34c402098/mysql-bootstrap/0.log" Jan 23 10:32:20 crc kubenswrapper[4684]: I0123 10:32:20.651344 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c5f17c-8303-4cae-b577-1da34c402098/galera/0.log" Jan 23 10:32:20 crc kubenswrapper[4684]: I0123 10:32:20.658850 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c5f17c-8303-4cae-b577-1da34c402098/mysql-bootstrap/0.log" Jan 23 10:32:20 crc kubenswrapper[4684]: I0123 10:32:20.895680 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_cfb564ff-94ae-4292-ad6c-41a36677efeb/openstackclient/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.086270 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_48b55b45-1ad6-4310-aaff-0a978bbf5538/nova-metadata-metadata/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.120195 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jgsg8_f6d184f2-6bff-43ba-98a6-6e131c7b45a8/ovn-controller/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.309947 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-x2qgc_8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86/openstack-network-exporter/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.448620 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovsdb-server-init/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.696266 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovsdb-server/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.731516 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovsdb-server-init/0.log" Jan 23 10:32:21 crc kubenswrapper[4684]: I0123 10:32:21.847062 4684 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovs-vswitchd/0.log" Jan 23 10:32:22 crc kubenswrapper[4684]: I0123 10:32:22.407809 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9klss_e755b648-4ecf-4fc5-922a-39c5061827de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:22 crc kubenswrapper[4684]: I0123 10:32:22.478181 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_366a8d70-2aa4-439d-a14e-4459b3f45736/openstack-network-exporter/0.log" Jan 23 10:32:22 crc kubenswrapper[4684]: I0123 10:32:22.517529 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_366a8d70-2aa4-439d-a14e-4459b3f45736/ovn-northd/0.log" Jan 23 10:32:22 crc kubenswrapper[4684]: I0123 10:32:22.740916 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_960d904d-7d3d-4c6a-a933-cf6c6a31d01d/openstack-network-exporter/0.log" Jan 23 10:32:22 crc kubenswrapper[4684]: I0123 10:32:22.750355 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_960d904d-7d3d-4c6a-a933-cf6c6a31d01d/ovsdbserver-nb/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.040469 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_092669ed-870b-4e9d-a34d-f62fca6b1660/openstack-network-exporter/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.054384 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_092669ed-870b-4e9d-a34d-f62fca6b1660/ovsdbserver-sb/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.343797 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f7c769f78-7sfgw_90ee2ffb-783f-491a-9fa8-e37f267872f6/placement-api/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.463715 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f7c769f78-7sfgw_90ee2ffb-783f-491a-9fa8-e37f267872f6/placement-log/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.465947 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5b7f0e5b-e1ba-4da5-b644-e16236fd5403/setup-container/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.854223 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5b7f0e5b-e1ba-4da5-b644-e16236fd5403/setup-container/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.932252 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5b7f0e5b-e1ba-4da5-b644-e16236fd5403/rabbitmq/0.log" Jan 23 10:32:23 crc kubenswrapper[4684]: I0123 10:32:23.967880 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d05a61f9-7d60-4073-ae62-7a4a59fe6ed6/setup-container/0.log" Jan 23 10:32:24 crc kubenswrapper[4684]: I0123 10:32:24.100157 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d05a61f9-7d60-4073-ae62-7a4a59fe6ed6/setup-container/0.log" Jan 23 10:32:24 crc kubenswrapper[4684]: I0123 10:32:24.311042 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c_89a1992b-4dc8-4218-a148-bec983fddd94/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:24 crc 
kubenswrapper[4684]: I0123 10:32:24.384293 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d05a61f9-7d60-4073-ae62-7a4a59fe6ed6/rabbitmq/0.log" Jan 23 10:32:24 crc kubenswrapper[4684]: I0123 10:32:24.573567 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd_6572a448-1ced-481b-af00-e2edb0d95187/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:24 crc kubenswrapper[4684]: I0123 10:32:24.729345 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7wznn_1139aa20-9131-40c7-bd06-f108d5ac42ab/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:24 crc kubenswrapper[4684]: I0123 10:32:24.842021 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bpqtq_d7513ac8-1304-4762-a2f2-6d3b152fc4a7/ssh-known-hosts-edpm-deployment/0.log" Jan 23 10:32:25 crc kubenswrapper[4684]: I0123 10:32:25.197530 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6a468899-9742-4407-95d4-55c6e2c14fe2/test-operator-logs-container/0.log" Jan 23 10:32:25 crc kubenswrapper[4684]: I0123 10:32:25.214395 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a/tempest-tests-tempest-tests-runner/0.log" Jan 23 10:32:25 crc kubenswrapper[4684]: I0123 10:32:25.453031 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8_e2aa43b6-cc3e-4a3f-a98d-a788624c5253/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:32:39 crc kubenswrapper[4684]: I0123 10:32:39.459042 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7320f601-5b97-49b4-af32-aeae7d297ed1/memcached/0.log" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.388129 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2f7kn"] Jan 23 10:32:43 crc kubenswrapper[4684]: E0123 10:32:43.389163 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90b044-e6b8-418c-9d3d-cb27b4220818" containerName="container-00" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.389181 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90b044-e6b8-418c-9d3d-cb27b4220818" containerName="container-00" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.389450 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f90b044-e6b8-418c-9d3d-cb27b4220818" containerName="container-00" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.391256 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.403535 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f7kn"] Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.545824 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjkr5\" (UniqueName: \"kubernetes.io/projected/792110d5-5a3f-4096-9da6-c3b4f322d48c-kube-api-access-fjkr5\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.545938 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-utilities\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.546350 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-catalog-content\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.647737 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-catalog-content\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.647841 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjkr5\" (UniqueName: \"kubernetes.io/projected/792110d5-5a3f-4096-9da6-c3b4f322d48c-kube-api-access-fjkr5\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.647936 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-utilities\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.648505 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-catalog-content\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.648543 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-utilities\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.670457 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fjkr5\" (UniqueName: \"kubernetes.io/projected/792110d5-5a3f-4096-9da6-c3b4f322d48c-kube-api-access-fjkr5\") pod \"community-operators-2f7kn\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:43 crc kubenswrapper[4684]: I0123 10:32:43.709306 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:44 crc kubenswrapper[4684]: I0123 10:32:44.400720 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f7kn"] Jan 23 10:32:44 crc kubenswrapper[4684]: I0123 10:32:44.933194 4684 generic.go:334] "Generic (PLEG): container finished" podID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerID="2f088104a577bddebe8d57a6eab25021f899609ddf1e2887db0bcc423a970755" exitCode=0 Jan 23 10:32:44 crc kubenswrapper[4684]: I0123 10:32:44.933510 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerDied","Data":"2f088104a577bddebe8d57a6eab25021f899609ddf1e2887db0bcc423a970755"} Jan 23 10:32:44 crc kubenswrapper[4684]: I0123 10:32:44.933540 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerStarted","Data":"abd5f776a8a963a915539b529443a27f07ed845dbacd24085bdbc22c10c9a1f9"} Jan 23 10:32:46 crc kubenswrapper[4684]: I0123 10:32:46.951611 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerStarted","Data":"395a8c09e3e1247ac3f8dacd4a6edb5924de5fe8a3d8d7befa846738338ff590"} Jan 23 10:32:49 crc kubenswrapper[4684]: I0123 10:32:49.988971 4684 generic.go:334] "Generic (PLEG): container finished" podID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerID="395a8c09e3e1247ac3f8dacd4a6edb5924de5fe8a3d8d7befa846738338ff590" exitCode=0 Jan 23 10:32:49 crc kubenswrapper[4684]: I0123 10:32:49.989057 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerDied","Data":"395a8c09e3e1247ac3f8dacd4a6edb5924de5fe8a3d8d7befa846738338ff590"} Jan 23 10:32:51 crc kubenswrapper[4684]: I0123 10:32:51.002106 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerStarted","Data":"463b13269dbe7d8701fbc8cf6752cac6ced25e9b94cec0d3e99b1efbf13255a5"} Jan 23 10:32:51 crc kubenswrapper[4684]: I0123 10:32:51.025760 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2f7kn" podStartSLOduration=2.451573078 podStartE2EDuration="8.025737094s" podCreationTimestamp="2026-01-23 10:32:43 +0000 UTC" firstStartedPulling="2026-01-23 10:32:44.936784564 +0000 UTC m=+5137.560163105" lastFinishedPulling="2026-01-23 10:32:50.51094857 +0000 UTC m=+5143.134327121" observedRunningTime="2026-01-23 10:32:51.019756632 +0000 UTC m=+5143.643135183" watchObservedRunningTime="2026-01-23 10:32:51.025737094 +0000 UTC m=+5143.649115635" Jan 23 10:32:53 crc kubenswrapper[4684]: I0123 10:32:53.709912 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:53 crc kubenswrapper[4684]: I0123 10:32:53.710275 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:32:54 crc kubenswrapper[4684]: I0123 10:32:54.762443 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2f7kn" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="registry-server" probeResult="failure" output=< Jan 23 10:32:54 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:32:54 crc kubenswrapper[4684]: > Jan 23 10:33:03 crc kubenswrapper[4684]: I0123 10:33:03.768904 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:33:03 crc kubenswrapper[4684]: I0123 10:33:03.836060 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:33:04 crc kubenswrapper[4684]: I0123 10:33:04.951025 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/util/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.234873 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/pull/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.235178 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/pull/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.291494 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/util/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.525383 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/pull/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.564458 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/util/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.638241 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/extract/0.log" Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.647295 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f7kn"] Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.647549 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2f7kn" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="registry-server" containerID="cri-o://463b13269dbe7d8701fbc8cf6752cac6ced25e9b94cec0d3e99b1efbf13255a5" gracePeriod=2 Jan 23 10:33:05 crc kubenswrapper[4684]: I0123 10:33:05.878544 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sbkxr_dc5b7444-cf61-439c-a7ed-3c97289e6cfe/manager/0.log" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.102184 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-srv5g_fd2ff302-08d1-4fd7-a45c-152155876b56/manager/0.log" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.149475 4684 generic.go:334] "Generic (PLEG): container finished" podID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerID="463b13269dbe7d8701fbc8cf6752cac6ced25e9b94cec0d3e99b1efbf13255a5" exitCode=0 Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.149533 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerDied","Data":"463b13269dbe7d8701fbc8cf6752cac6ced25e9b94cec0d3e99b1efbf13255a5"} Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.242132 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.308350 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-p77dl_31af0894-c5ac-41ef-842e-b7d01dfa2229/manager/0.log" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.330331 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-utilities\") pod \"792110d5-5a3f-4096-9da6-c3b4f322d48c\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.332614 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-utilities" (OuterVolumeSpecName: "utilities") pod "792110d5-5a3f-4096-9da6-c3b4f322d48c" (UID: "792110d5-5a3f-4096-9da6-c3b4f322d48c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.333095 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjkr5\" (UniqueName: \"kubernetes.io/projected/792110d5-5a3f-4096-9da6-c3b4f322d48c-kube-api-access-fjkr5\") pod \"792110d5-5a3f-4096-9da6-c3b4f322d48c\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.333251 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-catalog-content\") pod \"792110d5-5a3f-4096-9da6-c3b4f322d48c\" (UID: \"792110d5-5a3f-4096-9da6-c3b4f322d48c\") " Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.335184 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.352975 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792110d5-5a3f-4096-9da6-c3b4f322d48c-kube-api-access-fjkr5" (OuterVolumeSpecName: "kube-api-access-fjkr5") pod "792110d5-5a3f-4096-9da6-c3b4f322d48c" (UID: "792110d5-5a3f-4096-9da6-c3b4f322d48c"). 
InnerVolumeSpecName "kube-api-access-fjkr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.402126 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "792110d5-5a3f-4096-9da6-c3b4f322d48c" (UID: "792110d5-5a3f-4096-9da6-c3b4f322d48c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.437136 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjkr5\" (UniqueName: \"kubernetes.io/projected/792110d5-5a3f-4096-9da6-c3b4f322d48c-kube-api-access-fjkr5\") on node \"crc\" DevicePath \"\"" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.437407 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792110d5-5a3f-4096-9da6-c3b4f322d48c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.567308 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-hx5dq_299d3d78-4346-43f2-86f2-e1a3c20513a5/manager/0.log" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.686101 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-ht6sr_294e6daa-1ac9-4afc-b489-f7cff06c18ec/manager/0.log" Jan 23 10:33:06 crc kubenswrapper[4684]: I0123 10:33:06.898014 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-gc4d6_d61b277c-9b8c-423e-9b63-66dd812147c3/manager/0.log" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.163951 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f7kn" event={"ID":"792110d5-5a3f-4096-9da6-c3b4f322d48c","Type":"ContainerDied","Data":"abd5f776a8a963a915539b529443a27f07ed845dbacd24085bdbc22c10c9a1f9"} Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.164025 4684 scope.go:117] "RemoveContainer" containerID="463b13269dbe7d8701fbc8cf6752cac6ced25e9b94cec0d3e99b1efbf13255a5" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.164045 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f7kn" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.183979 4684 scope.go:117] "RemoveContainer" containerID="395a8c09e3e1247ac3f8dacd4a6edb5924de5fe8a3d8d7befa846738338ff590" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.217472 4684 scope.go:117] "RemoveContainer" containerID="2f088104a577bddebe8d57a6eab25021f899609ddf1e2887db0bcc423a970755" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.241370 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-6s79c_5bb19409-93c9-4453-800c-ce2899b48427/manager/0.log" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.274826 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f7kn"] Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.290870 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2f7kn"] Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.411910 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-t4lh8_56e669a2-5990-45ad-8d32-e8d57ef7a81e/manager/0.log" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.484011 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-lfjfh_67b55215-9df7-4273-8e15-27c0a969e065/manager/0.log" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.596295 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" path="/var/lib/kubelet/pods/792110d5-5a3f-4096-9da6-c3b4f322d48c/volumes" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.649774 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-skhwl_e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7/manager/0.log" Jan 23 10:33:07 crc kubenswrapper[4684]: I0123 10:33:07.851891 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-pl7fj_e1b45f19-8737-4f21-aade-d2b9cfda08fe/manager/0.log" Jan 23 10:33:08 crc kubenswrapper[4684]: I0123 10:33:08.021684 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-7nv72_9e4ad169-96f1-40ef-bedf-75d3a233ca35/manager/0.log" Jan 23 10:33:08 crc kubenswrapper[4684]: I0123 10:33:08.599545 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-jnlvz_b1376fdd-31b4-4a7a-a9b6-1a38565083cb/manager/0.log" Jan 23 10:33:08 crc kubenswrapper[4684]: I0123 10:33:08.857014 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-b82vt_2466d64b-62c9-422f-9609-5aaaa7de084c/manager/0.log" Jan 23 10:33:08 crc kubenswrapper[4684]: I0123 10:33:08.969809 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq_b0bb140c-ce3d-4d8b-8627-67ae0145b2d4/manager/0.log" Jan 23 10:33:09 crc kubenswrapper[4684]: I0123 10:33:09.293150 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-85bfd44c94-6dlkw_652bdac8-6488-4303-9d64-809a46258816/operator/0.log" 
Jan 23 10:33:09 crc kubenswrapper[4684]: I0123 10:33:09.709314 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zt5gw_2c935634-e963-49ad-868b-7576011f21fb/registry-server/0.log" Jan 23 10:33:09 crc kubenswrapper[4684]: I0123 10:33:09.815467 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ll27v_0755ab86-427c-4e7b-8712-4db92f543c69/manager/0.log" Jan 23 10:33:10 crc kubenswrapper[4684]: I0123 10:33:10.488563 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-dbggg_ba45281f-6224-4ce8-bc8e-df42f7e89340/manager/0.log" Jan 23 10:33:10 crc kubenswrapper[4684]: I0123 10:33:10.654459 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-c6nkk_b45428ef-0f84-4d58-ab99-9d7e26470caa/operator/0.log" Jan 23 10:33:10 crc kubenswrapper[4684]: I0123 10:33:10.806828 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57c46955cf-s5vdl_ef474359-484b-4042-8d86-0aa2fce7a260/manager/0.log" Jan 23 10:33:10 crc kubenswrapper[4684]: I0123 10:33:10.819202 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-8cnrp_ca0f93c0-4138-44c8-bd7d-027ced364a97/manager/0.log" Jan 23 10:33:10 crc kubenswrapper[4684]: I0123 10:33:10.969253 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4rk7k_829a9115-60b9-4f34-811a-1acc4cbd9897/manager/0.log" Jan 23 10:33:11 crc kubenswrapper[4684]: I0123 10:33:11.061966 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-sx2td_afb73601-eb5b-44cd-9f30-4e38a4cc28be/manager/0.log" Jan 23 10:33:11 crc kubenswrapper[4684]: I0123 10:33:11.113710 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2f7kg_b3f2f6c1-234f-457b-b335-f7e732976b73/manager/0.log" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.212008 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4qpn2_f92af7c0-b6ef-4fe1-b057-b2424aa96458/control-plane-machine-set-operator/0.log" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.274475 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pgngb_9b3c5fb5-4205-4162-9d9e-b522ee092236/kube-rbac-proxy/0.log" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.352328 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pgngb_9b3c5fb5-4205-4162-9d9e-b522ee092236/machine-api-operator/0.log" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.793929 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bw9sx"] Jan 23 10:33:35 crc kubenswrapper[4684]: E0123 10:33:35.794531 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="extract-utilities" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.794550 4684 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="extract-utilities" Jan 23 10:33:35 crc kubenswrapper[4684]: E0123 10:33:35.794569 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="registry-server" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.794576 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="registry-server" Jan 23 10:33:35 crc kubenswrapper[4684]: E0123 10:33:35.794592 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="extract-content" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.794598 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="extract-content" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.794773 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="792110d5-5a3f-4096-9da6-c3b4f322d48c" containerName="registry-server" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.796190 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.815669 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-catalog-content\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.816044 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b8wm\" (UniqueName: \"kubernetes.io/projected/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-kube-api-access-7b8wm\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.816154 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-utilities\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.860767 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw9sx"] Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.918271 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-catalog-content\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.918402 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b8wm\" (UniqueName: \"kubernetes.io/projected/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-kube-api-access-7b8wm\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.918444 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-utilities\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.918993 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-utilities\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.919372 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-catalog-content\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:35 crc kubenswrapper[4684]: I0123 10:33:35.940006 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b8wm\" (UniqueName: \"kubernetes.io/projected/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-kube-api-access-7b8wm\") pod \"redhat-operators-bw9sx\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:36 crc kubenswrapper[4684]: I0123 10:33:36.121446 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:36 crc kubenswrapper[4684]: I0123 10:33:36.484623 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw9sx"] Jan 23 10:33:37 crc kubenswrapper[4684]: I0123 10:33:37.457649 4684 generic.go:334] "Generic (PLEG): container finished" podID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerID="2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd" exitCode=0 Jan 23 10:33:37 crc kubenswrapper[4684]: I0123 10:33:37.457756 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerDied","Data":"2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd"} Jan 23 10:33:37 crc kubenswrapper[4684]: I0123 10:33:37.458275 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerStarted","Data":"a341bfb01300c94be52fedbea81252eb21c45b92436b6d644a66ea89cc4ffe07"} Jan 23 10:33:37 crc kubenswrapper[4684]: I0123 10:33:37.465259 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:33:39 crc kubenswrapper[4684]: I0123 10:33:39.475362 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerStarted","Data":"f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb"} Jan 23 10:33:43 crc kubenswrapper[4684]: I0123 10:33:43.514081 4684 generic.go:334] "Generic (PLEG): container finished" podID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerID="f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb" exitCode=0 Jan 23 10:33:43 crc kubenswrapper[4684]: I0123 10:33:43.514180 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerDied","Data":"f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb"} Jan 23 10:33:45 crc kubenswrapper[4684]: I0123 10:33:45.530066 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerStarted","Data":"f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d"} Jan 23 10:33:45 crc kubenswrapper[4684]: I0123 10:33:45.554475 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bw9sx" podStartSLOduration=3.208905277 podStartE2EDuration="10.554450628s" podCreationTimestamp="2026-01-23 10:33:35 +0000 UTC" firstStartedPulling="2026-01-23 10:33:37.464969918 +0000 UTC m=+5190.088348459" lastFinishedPulling="2026-01-23 10:33:44.810515259 +0000 UTC m=+5197.433893810" observedRunningTime="2026-01-23 10:33:45.548291462 +0000 UTC m=+5198.171670013" watchObservedRunningTime="2026-01-23 10:33:45.554450628 +0000 UTC m=+5198.177829169" Jan 23 10:33:46 crc kubenswrapper[4684]: I0123 10:33:46.121929 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:46 crc kubenswrapper[4684]: I0123 10:33:46.121992 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:47 crc kubenswrapper[4684]: I0123 10:33:47.168475 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bw9sx" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="registry-server" probeResult="failure" output=< Jan 23 10:33:47 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:33:47 crc kubenswrapper[4684]: > Jan 23 10:33:51 crc kubenswrapper[4684]: I0123 10:33:51.123888 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-9kbld_05d3b6d9-c965-441d-a575-dd4d250c519b/cert-manager-controller/0.log" Jan 23 10:33:51 crc kubenswrapper[4684]: I0123 10:33:51.401936 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8p4gl_f4c0acc8-e95c-4880-ad7b-eafc6422a713/cert-manager-cainjector/0.log" Jan 23 10:33:51 crc kubenswrapper[4684]: I0123 10:33:51.502373 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sfbw8_b61e14d8-17ad-4f3b-aa18-e0030a15c870/cert-manager-webhook/0.log" Jan 23 10:33:56 crc kubenswrapper[4684]: I0123 10:33:56.171440 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:56 crc kubenswrapper[4684]: I0123 10:33:56.224414 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:56 crc kubenswrapper[4684]: I0123 10:33:56.407116 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw9sx"] Jan 23 10:33:57 crc kubenswrapper[4684]: I0123 10:33:57.634012 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bw9sx" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="registry-server" 
containerID="cri-o://f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d" gracePeriod=2 Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.098274 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.181639 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-catalog-content\") pod \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.181821 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b8wm\" (UniqueName: \"kubernetes.io/projected/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-kube-api-access-7b8wm\") pod \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.181847 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-utilities\") pod \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\" (UID: \"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044\") " Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.183369 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-utilities" (OuterVolumeSpecName: "utilities") pod "55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" (UID: "55eac6ac-f3fc-4c2d-83f9-d8859d6ec044"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.192022 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-kube-api-access-7b8wm" (OuterVolumeSpecName: "kube-api-access-7b8wm") pod "55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" (UID: "55eac6ac-f3fc-4c2d-83f9-d8859d6ec044"). InnerVolumeSpecName "kube-api-access-7b8wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.284599 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b8wm\" (UniqueName: \"kubernetes.io/projected/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-kube-api-access-7b8wm\") on node \"crc\" DevicePath \"\"" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.284646 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.317849 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" (UID: "55eac6ac-f3fc-4c2d-83f9-d8859d6ec044"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.387233 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.654348 4684 generic.go:334] "Generic (PLEG): container finished" podID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerID="f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d" exitCode=0 Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.654410 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerDied","Data":"f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d"} Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.654454 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9sx" event={"ID":"55eac6ac-f3fc-4c2d-83f9-d8859d6ec044","Type":"ContainerDied","Data":"a341bfb01300c94be52fedbea81252eb21c45b92436b6d644a66ea89cc4ffe07"} Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.654476 4684 scope.go:117] "RemoveContainer" containerID="f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.654658 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9sx" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.691651 4684 scope.go:117] "RemoveContainer" containerID="f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.712321 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw9sx"] Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.723286 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bw9sx"] Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.732238 4684 scope.go:117] "RemoveContainer" containerID="2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.762796 4684 scope.go:117] "RemoveContainer" containerID="f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d" Jan 23 10:33:58 crc kubenswrapper[4684]: E0123 10:33:58.763372 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d\": container with ID starting with f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d not found: ID does not exist" containerID="f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.763403 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d"} err="failed to get container status \"f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d\": rpc error: code = NotFound desc = could not find container \"f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d\": container with ID starting with f0ca77707071aa93ac359f7de51f0f23adeb1cf553a6f52624b80fc0f8e0904d not found: ID does not exist" Jan 23 10:33:58 crc 
kubenswrapper[4684]: I0123 10:33:58.763422 4684 scope.go:117] "RemoveContainer" containerID="f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb" Jan 23 10:33:58 crc kubenswrapper[4684]: E0123 10:33:58.763657 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb\": container with ID starting with f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb not found: ID does not exist" containerID="f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.763683 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb"} err="failed to get container status \"f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb\": rpc error: code = NotFound desc = could not find container \"f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb\": container with ID starting with f8132f403a5a8ab89a7b1140bc0a10f341d4430aabba50fc0bfed3ff1d4aecdb not found: ID does not exist" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.763701 4684 scope.go:117] "RemoveContainer" containerID="2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd" Jan 23 10:33:58 crc kubenswrapper[4684]: E0123 10:33:58.764005 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd\": container with ID starting with 2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd not found: ID does not exist" containerID="2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd" Jan 23 10:33:58 crc kubenswrapper[4684]: I0123 10:33:58.764033 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd"} err="failed to get container status \"2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd\": rpc error: code = NotFound desc = could not find container \"2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd\": container with ID starting with 2c72feca205ae000884ef3f1da59024903976d2989bf31ed35dc1ddb392fd7bd not found: ID does not exist" Jan 23 10:33:59 crc kubenswrapper[4684]: I0123 10:33:59.595401 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" path="/var/lib/kubelet/pods/55eac6ac-f3fc-4c2d-83f9-d8859d6ec044/volumes" Jan 23 10:34:04 crc kubenswrapper[4684]: I0123 10:34:04.876342 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-l7dkm_bedfa793-7aff-4710-ae19-260a52e2957f/nmstate-console-plugin/0.log" Jan 23 10:34:05 crc kubenswrapper[4684]: I0123 10:34:05.180100 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2kxj8_2125ebe0-da30-4e7c-93e0-66b7aa2b87e4/nmstate-handler/0.log" Jan 23 10:34:05 crc kubenswrapper[4684]: I0123 10:34:05.322595 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dlvm4_55e58493-0888-4e94-bf0f-6c5b99a10ac4/kube-rbac-proxy/0.log" Jan 23 10:34:05 crc kubenswrapper[4684]: I0123 10:34:05.351869 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dlvm4_55e58493-0888-4e94-bf0f-6c5b99a10ac4/nmstate-metrics/0.log" Jan 23 10:34:05 crc kubenswrapper[4684]: I0123 10:34:05.462454 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-qrbb4_4e70b1ea-5bbb-44b8-893b-0b08388d8a39/nmstate-operator/0.log" Jan 23 10:34:05 crc kubenswrapper[4684]: I0123 10:34:05.606562 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-p4bsj_7f98efc7-bdf6-4943-8ef9-9056f713acb2/nmstate-webhook/0.log" Jan 23 10:34:08 crc kubenswrapper[4684]: I0123 10:34:08.307993 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g94qp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 10:34:08 crc kubenswrapper[4684]: I0123 10:34:08.320805 4684 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2xmjn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 10:34:08 crc kubenswrapper[4684]: I0123 10:34:08.321999 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g94qp" podUID="35a3e02f-21f3-4762-8260-c52003d4499c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 10:34:08 crc kubenswrapper[4684]: I0123 10:34:08.322661 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2xmjn" podUID="9071fc4b-8d0f-41fe-832b-c3c9f5f0351b" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 10:34:17 crc kubenswrapper[4684]: I0123 10:34:17.879133 4684 scope.go:117] "RemoveContainer" containerID="af88254f892e22e53bd4c8dcf7b3f59af1f6ffbc8d7e497508864051aa4010ed" Jan 23 10:34:17 crc kubenswrapper[4684]: I0123 10:34:17.909606 4684 scope.go:117] "RemoveContainer" containerID="3a6a175e8375cead938cfbb3f2df916b09e18dde09f03a4c2052f1e7966f2eb9" Jan 23 10:34:17 crc kubenswrapper[4684]: I0123 10:34:17.958176 4684 scope.go:117] "RemoveContainer" containerID="f683111af560060791e3757e3cf834ca253c727794dc7d0d128678c70fa639de" Jan 23 10:34:35 crc kubenswrapper[4684]: I0123 10:34:35.260844 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8v8jk_b6455af6-22c5-44ad-a1fb-7d50f4a5271d/kube-rbac-proxy/0.log" Jan 23 10:34:35 crc kubenswrapper[4684]: I0123 10:34:35.312602 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8v8jk_b6455af6-22c5-44ad-a1fb-7d50f4a5271d/controller/0.log" Jan 23 10:34:35 crc kubenswrapper[4684]: I0123 10:34:35.524537 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qp4nh_ae885236-c9d2-4c57-bc11-a9aa077f5d1b/frr-k8s-webhook-server/0.log" Jan 23 10:34:35 crc kubenswrapper[4684]: I0123 10:34:35.668608 4684 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log" Jan 23 10:34:35 crc kubenswrapper[4684]: I0123 10:34:35.956053 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log" Jan 23 10:34:35 crc kubenswrapper[4684]: I0123 10:34:35.984250 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.055291 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.070464 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.264561 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.264645 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.296712 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.312770 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.462549 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.492234 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.498266 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.543319 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/controller/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.706830 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/kube-rbac-proxy/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.749367 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/frr-metrics/0.log" Jan 23 10:34:36 crc kubenswrapper[4684]: I0123 10:34:36.777504 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/kube-rbac-proxy-frr/0.log" Jan 23 10:34:37 crc kubenswrapper[4684]: I0123 10:34:37.059114 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/reloader/0.log" Jan 23 10:34:37 crc 
kubenswrapper[4684]: I0123 10:34:37.197678 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66c47b49dd-q49fh_00c9dbc4-3023-4be1-9876-0e2e2b35ac82/manager/0.log" Jan 23 10:34:37 crc kubenswrapper[4684]: I0123 10:34:37.455533 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bfcb9dfcc-7qsz8_c001f52e-014a-4250-af27-7fdcebc0c759/webhook-server/0.log" Jan 23 10:34:37 crc kubenswrapper[4684]: I0123 10:34:37.712374 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v69pl_c673aad0-48c8-4410-9d62-028ebc02c103/kube-rbac-proxy/0.log" Jan 23 10:34:38 crc kubenswrapper[4684]: I0123 10:34:38.428878 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v69pl_c673aad0-48c8-4410-9d62-028ebc02c103/speaker/0.log" Jan 23 10:34:38 crc kubenswrapper[4684]: I0123 10:34:38.561391 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/frr/0.log" Jan 23 10:34:43 crc kubenswrapper[4684]: I0123 10:34:43.728990 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:34:43 crc kubenswrapper[4684]: I0123 10:34:43.729640 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.468349 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/util/0.log" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.701732 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/util/0.log" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.743798 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/pull/0.log" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.753733 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/pull/0.log" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.898303 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/util/0.log" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.935508 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/extract/0.log" Jan 23 10:34:51 crc kubenswrapper[4684]: I0123 10:34:51.959102 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/pull/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.087429 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/util/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.287585 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.307903 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.322712 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/util/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.519554 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/util/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.603620 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/extract/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.603842 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.724977 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-utilities/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.901194 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-utilities/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.967791 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-content/0.log" Jan 23 10:34:52 crc kubenswrapper[4684]: I0123 10:34:52.974282 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-content/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.176203 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-utilities/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.197628 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-content/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.416251 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-utilities/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.783898 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-utilities/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.850595 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/registry-server/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.880216 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-content/0.log" Jan 23 10:34:53 crc kubenswrapper[4684]: I0123 10:34:53.887595 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-content/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.108011 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-utilities/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.173979 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-content/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.583513 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/registry-server/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.629336 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-25vv4_9703bbe4-b658-40eb-b8db-14f18c684ab3/marketplace-operator/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.763432 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-utilities/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.839343 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-utilities/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.897816 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-content/0.log" Jan 23 10:34:54 crc kubenswrapper[4684]: I0123 10:34:54.954194 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-content/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.243250 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-utilities/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.353308 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-content/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.435417 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/registry-server/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.520819 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-utilities/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.719198 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-content/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.725844 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-utilities/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.753349 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-content/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.958450 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-content/0.log" Jan 23 10:34:55 crc kubenswrapper[4684]: I0123 10:34:55.971745 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-utilities/0.log" Jan 23 10:34:56 crc kubenswrapper[4684]: I0123 10:34:56.781920 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/registry-server/0.log" Jan 23 10:35:13 crc kubenswrapper[4684]: I0123 10:35:13.728691 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:35:13 crc kubenswrapper[4684]: I0123 10:35:13.729301 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:35:19 crc kubenswrapper[4684]: E0123 10:35:19.348507 4684 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.16:42802->38.129.56.16:46467: write tcp 38.129.56.16:42802->38.129.56.16:46467: write: connection reset by peer Jan 23 10:35:43 crc kubenswrapper[4684]: I0123 10:35:43.728225 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:35:43 crc kubenswrapper[4684]: I0123 10:35:43.728711 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:35:43 crc 
kubenswrapper[4684]: I0123 10:35:43.728751 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 10:35:43 crc kubenswrapper[4684]: I0123 10:35:43.729402 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:35:43 crc kubenswrapper[4684]: I0123 10:35:43.729454 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" gracePeriod=600 Jan 23 10:35:43 crc kubenswrapper[4684]: E0123 10:35:43.855570 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:35:44 crc kubenswrapper[4684]: I0123 10:35:44.185539 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" exitCode=0 Jan 23 10:35:44 crc kubenswrapper[4684]: I0123 10:35:44.185587 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf"} Jan 23 10:35:44 crc kubenswrapper[4684]: I0123 10:35:44.185629 4684 scope.go:117] "RemoveContainer" containerID="1756bd959f6d356e73018d112baa4f2e84373b3c4243cd97818969471c5f5c40" Jan 23 10:35:44 crc kubenswrapper[4684]: I0123 10:35:44.186451 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:35:44 crc kubenswrapper[4684]: E0123 10:35:44.186807 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:35:58 crc kubenswrapper[4684]: I0123 10:35:58.581665 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:35:58 crc kubenswrapper[4684]: E0123 10:35:58.582536 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:36:09 crc kubenswrapper[4684]: I0123 10:36:09.581884 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:36:09 crc kubenswrapper[4684]: E0123 10:36:09.582633 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:36:21 crc kubenswrapper[4684]: I0123 10:36:21.590779 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:36:21 crc kubenswrapper[4684]: E0123 10:36:21.601170 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:36:32 crc kubenswrapper[4684]: I0123 10:36:32.581796 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:36:32 crc kubenswrapper[4684]: E0123 10:36:32.582519 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:36:44 crc kubenswrapper[4684]: I0123 10:36:44.582490 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:36:44 crc kubenswrapper[4684]: E0123 10:36:44.583117 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:36:56 crc kubenswrapper[4684]: I0123 10:36:56.581992 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:36:56 crc kubenswrapper[4684]: E0123 10:36:56.583803 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:37:09 crc kubenswrapper[4684]: I0123 10:37:09.582949 4684 
scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:37:09 crc kubenswrapper[4684]: E0123 10:37:09.583712 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:37:18 crc kubenswrapper[4684]: I0123 10:37:18.110611 4684 scope.go:117] "RemoveContainer" containerID="20c55fbf31e83b5a090b264e60b7e74a25d6c1a2685de42d4d779ae415fe95e2" Jan 23 10:37:22 crc kubenswrapper[4684]: I0123 10:37:22.582587 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:37:22 crc kubenswrapper[4684]: E0123 10:37:22.583262 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:37:30 crc kubenswrapper[4684]: E0123 10:37:30.678670 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2f30992_3b03_4d65_bca1_e1eb2c1ffd87.slice/crio-d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368.scope\": RecentStats: unable to find data in memory cache]" Jan 23 10:37:31 crc kubenswrapper[4684]: I0123 10:37:31.089952 4684 generic.go:334] "Generic (PLEG): container finished" podID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerID="d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368" exitCode=0 Jan 23 10:37:31 crc kubenswrapper[4684]: I0123 10:37:31.090055 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vptsh/must-gather-h8xvk" event={"ID":"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87","Type":"ContainerDied","Data":"d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368"} Jan 23 10:37:31 crc kubenswrapper[4684]: I0123 10:37:31.090682 4684 scope.go:117] "RemoveContainer" containerID="d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368" Jan 23 10:37:31 crc kubenswrapper[4684]: I0123 10:37:31.469756 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vptsh_must-gather-h8xvk_b2f30992-3b03-4d65-bca1-e1eb2c1ffd87/gather/0.log" Jan 23 10:37:36 crc kubenswrapper[4684]: I0123 10:37:36.582333 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:37:36 crc kubenswrapper[4684]: E0123 10:37:36.583101 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:37:41 crc 
kubenswrapper[4684]: I0123 10:37:41.341346 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vptsh/must-gather-h8xvk"] Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.344316 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vptsh/must-gather-h8xvk" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="copy" containerID="cri-o://df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef" gracePeriod=2 Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.360699 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vptsh/must-gather-h8xvk"] Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.811538 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vptsh_must-gather-h8xvk_b2f30992-3b03-4d65-bca1-e1eb2c1ffd87/copy/0.log" Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.812598 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.878519 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbrt4\" (UniqueName: \"kubernetes.io/projected/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-kube-api-access-wbrt4\") pod \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.878805 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-must-gather-output\") pod \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\" (UID: \"b2f30992-3b03-4d65-bca1-e1eb2c1ffd87\") " Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.899306 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-kube-api-access-wbrt4" (OuterVolumeSpecName: "kube-api-access-wbrt4") pod "b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" (UID: "b2f30992-3b03-4d65-bca1-e1eb2c1ffd87"). InnerVolumeSpecName "kube-api-access-wbrt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:37:41 crc kubenswrapper[4684]: I0123 10:37:41.982034 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbrt4\" (UniqueName: \"kubernetes.io/projected/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-kube-api-access-wbrt4\") on node \"crc\" DevicePath \"\"" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.055992 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" (UID: "b2f30992-3b03-4d65-bca1-e1eb2c1ffd87"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.083351 4684 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.184390 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vptsh_must-gather-h8xvk_b2f30992-3b03-4d65-bca1-e1eb2c1ffd87/copy/0.log" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.185422 4684 generic.go:334] "Generic (PLEG): container finished" podID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerID="df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef" exitCode=143 Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.185483 4684 scope.go:117] "RemoveContainer" containerID="df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.185499 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vptsh/must-gather-h8xvk" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.206301 4684 scope.go:117] "RemoveContainer" containerID="d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.265662 4684 scope.go:117] "RemoveContainer" containerID="df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef" Jan 23 10:37:42 crc kubenswrapper[4684]: E0123 10:37:42.266297 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef\": container with ID starting with df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef not found: ID does not exist" containerID="df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.266332 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef"} err="failed to get container status \"df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef\": rpc error: code = NotFound desc = could not find container \"df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef\": container with ID starting with df2aa74f3a104858cd808b67e9896c4a4bd8e923868d5cfb7926c98d2459d5ef not found: ID does not exist" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.266366 4684 scope.go:117] "RemoveContainer" containerID="d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368" Jan 23 10:37:42 crc kubenswrapper[4684]: E0123 10:37:42.267123 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368\": container with ID starting with d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368 not found: ID does not exist" containerID="d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368" Jan 23 10:37:42 crc kubenswrapper[4684]: I0123 10:37:42.267174 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368"} err="failed to get container status 
\"d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368\": rpc error: code = NotFound desc = could not find container \"d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368\": container with ID starting with d093ff96b6e4173fe4bf23463588956f31c7402a5a7eb2c8ef6f9c11c8adf368 not found: ID does not exist" Jan 23 10:37:43 crc kubenswrapper[4684]: I0123 10:37:43.591698 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" path="/var/lib/kubelet/pods/b2f30992-3b03-4d65-bca1-e1eb2c1ffd87/volumes" Jan 23 10:37:48 crc kubenswrapper[4684]: I0123 10:37:48.582689 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:37:48 crc kubenswrapper[4684]: E0123 10:37:48.583315 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:38:02 crc kubenswrapper[4684]: I0123 10:38:02.581957 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:38:02 crc kubenswrapper[4684]: E0123 10:38:02.582977 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:38:13 crc kubenswrapper[4684]: I0123 10:38:13.583598 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:38:13 crc kubenswrapper[4684]: E0123 10:38:13.584490 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:38:26 crc kubenswrapper[4684]: I0123 10:38:26.582484 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:38:26 crc kubenswrapper[4684]: E0123 10:38:26.583301 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:38:40 crc kubenswrapper[4684]: I0123 10:38:40.582007 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:38:40 crc kubenswrapper[4684]: E0123 10:38:40.582766 4684 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.010314 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kmt7v"] Jan 23 10:38:45 crc kubenswrapper[4684]: E0123 10:38:45.012925 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="extract-utilities" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.014397 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="extract-utilities" Jan 23 10:38:45 crc kubenswrapper[4684]: E0123 10:38:45.014555 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="extract-content" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.014647 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="extract-content" Jan 23 10:38:45 crc kubenswrapper[4684]: E0123 10:38:45.014797 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="copy" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.014879 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="copy" Jan 23 10:38:45 crc kubenswrapper[4684]: E0123 10:38:45.014970 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="registry-server" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.015047 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="registry-server" Jan 23 10:38:45 crc kubenswrapper[4684]: E0123 10:38:45.015128 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="gather" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.015204 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="gather" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.015656 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="55eac6ac-f3fc-4c2d-83f9-d8859d6ec044" containerName="registry-server" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.015783 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="copy" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.015873 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f30992-3b03-4d65-bca1-e1eb2c1ffd87" containerName="gather" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.019592 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.051956 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmt7v"] Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.093375 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-catalog-content\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.093614 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-utilities\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.093724 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97m29\" (UniqueName: \"kubernetes.io/projected/94f76145-8fd6-439a-bc4a-013a18d59731-kube-api-access-97m29\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.195167 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-utilities\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.195266 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97m29\" (UniqueName: \"kubernetes.io/projected/94f76145-8fd6-439a-bc4a-013a18d59731-kube-api-access-97m29\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.195297 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-catalog-content\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.195825 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-utilities\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.195861 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-catalog-content\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.216759 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-97m29\" (UniqueName: \"kubernetes.io/projected/94f76145-8fd6-439a-bc4a-013a18d59731-kube-api-access-97m29\") pod \"certified-operators-kmt7v\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.344545 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:45 crc kubenswrapper[4684]: I0123 10:38:45.785979 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmt7v"] Jan 23 10:38:46 crc kubenswrapper[4684]: I0123 10:38:46.755035 4684 generic.go:334] "Generic (PLEG): container finished" podID="94f76145-8fd6-439a-bc4a-013a18d59731" containerID="f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215" exitCode=0 Jan 23 10:38:46 crc kubenswrapper[4684]: I0123 10:38:46.755103 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerDied","Data":"f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215"} Jan 23 10:38:46 crc kubenswrapper[4684]: I0123 10:38:46.755403 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerStarted","Data":"ad8dd832876621ade87764c45e2504ecccd1c012a5437bab9dc0ede5f750bf6f"} Jan 23 10:38:46 crc kubenswrapper[4684]: I0123 10:38:46.756973 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:38:47 crc kubenswrapper[4684]: I0123 10:38:47.765021 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerStarted","Data":"7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed"} Jan 23 10:38:48 crc kubenswrapper[4684]: I0123 10:38:48.776118 4684 generic.go:334] "Generic (PLEG): container finished" podID="94f76145-8fd6-439a-bc4a-013a18d59731" containerID="7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed" exitCode=0 Jan 23 10:38:48 crc kubenswrapper[4684]: I0123 10:38:48.776327 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerDied","Data":"7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed"} Jan 23 10:38:49 crc kubenswrapper[4684]: I0123 10:38:49.786840 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerStarted","Data":"e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7"} Jan 23 10:38:49 crc kubenswrapper[4684]: I0123 10:38:49.804618 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kmt7v" podStartSLOduration=3.40061518 podStartE2EDuration="5.804599479s" podCreationTimestamp="2026-01-23 10:38:44 +0000 UTC" firstStartedPulling="2026-01-23 10:38:46.756744462 +0000 UTC m=+5499.380123003" lastFinishedPulling="2026-01-23 10:38:49.160728761 +0000 UTC m=+5501.784107302" observedRunningTime="2026-01-23 10:38:49.804134456 +0000 UTC m=+5502.427512997" watchObservedRunningTime="2026-01-23 
10:38:49.804599479 +0000 UTC m=+5502.427978020" Jan 23 10:38:52 crc kubenswrapper[4684]: I0123 10:38:52.581799 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:38:52 crc kubenswrapper[4684]: E0123 10:38:52.582566 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:38:55 crc kubenswrapper[4684]: I0123 10:38:55.346077 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:55 crc kubenswrapper[4684]: I0123 10:38:55.347349 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:55 crc kubenswrapper[4684]: I0123 10:38:55.388342 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:55 crc kubenswrapper[4684]: I0123 10:38:55.904282 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:55 crc kubenswrapper[4684]: I0123 10:38:55.961856 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmt7v"] Jan 23 10:38:57 crc kubenswrapper[4684]: I0123 10:38:57.854590 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kmt7v" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="registry-server" containerID="cri-o://e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7" gracePeriod=2 Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.615356 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.679417 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-utilities\") pod \"94f76145-8fd6-439a-bc4a-013a18d59731\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.679511 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-catalog-content\") pod \"94f76145-8fd6-439a-bc4a-013a18d59731\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.679629 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97m29\" (UniqueName: \"kubernetes.io/projected/94f76145-8fd6-439a-bc4a-013a18d59731-kube-api-access-97m29\") pod \"94f76145-8fd6-439a-bc4a-013a18d59731\" (UID: \"94f76145-8fd6-439a-bc4a-013a18d59731\") " Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.680432 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-utilities" (OuterVolumeSpecName: "utilities") pod "94f76145-8fd6-439a-bc4a-013a18d59731" (UID: "94f76145-8fd6-439a-bc4a-013a18d59731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.694901 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f76145-8fd6-439a-bc4a-013a18d59731-kube-api-access-97m29" (OuterVolumeSpecName: "kube-api-access-97m29") pod "94f76145-8fd6-439a-bc4a-013a18d59731" (UID: "94f76145-8fd6-439a-bc4a-013a18d59731"). InnerVolumeSpecName "kube-api-access-97m29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.746638 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94f76145-8fd6-439a-bc4a-013a18d59731" (UID: "94f76145-8fd6-439a-bc4a-013a18d59731"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.782322 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97m29\" (UniqueName: \"kubernetes.io/projected/94f76145-8fd6-439a-bc4a-013a18d59731-kube-api-access-97m29\") on node \"crc\" DevicePath \"\"" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.782365 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.782375 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f76145-8fd6-439a-bc4a-013a18d59731-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.866861 4684 generic.go:334] "Generic (PLEG): container finished" podID="94f76145-8fd6-439a-bc4a-013a18d59731" containerID="e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7" exitCode=0 Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.866910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerDied","Data":"e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7"} Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.866950 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmt7v" event={"ID":"94f76145-8fd6-439a-bc4a-013a18d59731","Type":"ContainerDied","Data":"ad8dd832876621ade87764c45e2504ecccd1c012a5437bab9dc0ede5f750bf6f"} Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.866969 4684 scope.go:117] "RemoveContainer" containerID="e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.867115 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kmt7v" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.894423 4684 scope.go:117] "RemoveContainer" containerID="7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.922397 4684 scope.go:117] "RemoveContainer" containerID="f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.923888 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmt7v"] Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.937995 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kmt7v"] Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.967659 4684 scope.go:117] "RemoveContainer" containerID="e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7" Jan 23 10:38:58 crc kubenswrapper[4684]: E0123 10:38:58.968259 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7\": container with ID starting with e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7 not found: ID does not exist" containerID="e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.968339 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7"} err="failed to get container status \"e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7\": rpc error: code = NotFound desc = could not find container \"e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7\": container with ID starting with e035697253168b4988381fd9b4f0272254ebdc1f742a4a3ccb0eb553a4c57ab7 not found: ID does not exist" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.968378 4684 scope.go:117] "RemoveContainer" containerID="7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed" Jan 23 10:38:58 crc kubenswrapper[4684]: E0123 10:38:58.968790 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed\": container with ID starting with 7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed not found: ID does not exist" containerID="7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.968824 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed"} err="failed to get container status \"7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed\": rpc error: code = NotFound desc = could not find container \"7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed\": container with ID starting with 7ebbe3cec25ae6ed2fcd20b7042f399f6e02dd1f51ead487eb343ef962c622ed not found: ID does not exist" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.968895 4684 scope.go:117] "RemoveContainer" containerID="f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215" Jan 23 10:38:58 crc kubenswrapper[4684]: E0123 10:38:58.969200 4684 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215\": container with ID starting with f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215 not found: ID does not exist" containerID="f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215" Jan 23 10:38:58 crc kubenswrapper[4684]: I0123 10:38:58.969232 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215"} err="failed to get container status \"f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215\": rpc error: code = NotFound desc = could not find container \"f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215\": container with ID starting with f67168cfc8a86f99a86ea5e6b4cf9126797d011c41882dd018b2d28088e10215 not found: ID does not exist" Jan 23 10:38:59 crc kubenswrapper[4684]: I0123 10:38:59.592830 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" path="/var/lib/kubelet/pods/94f76145-8fd6-439a-bc4a-013a18d59731/volumes" Jan 23 10:39:05 crc kubenswrapper[4684]: I0123 10:39:05.582937 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:39:05 crc kubenswrapper[4684]: E0123 10:39:05.583792 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.913769 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxn8m"] Jan 23 10:39:16 crc kubenswrapper[4684]: E0123 10:39:16.914691 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="extract-content" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.914726 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="extract-content" Jan 23 10:39:16 crc kubenswrapper[4684]: E0123 10:39:16.914743 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="registry-server" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.914749 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="registry-server" Jan 23 10:39:16 crc kubenswrapper[4684]: E0123 10:39:16.914762 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="extract-utilities" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.914768 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="extract-utilities" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.914959 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f76145-8fd6-439a-bc4a-013a18d59731" containerName="registry-server" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.917263 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.927106 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxn8m"] Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.965484 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9gc\" (UniqueName: \"kubernetes.io/projected/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-kube-api-access-8t9gc\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.965708 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-catalog-content\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:16 crc kubenswrapper[4684]: I0123 10:39:16.965787 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-utilities\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.068850 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9gc\" (UniqueName: \"kubernetes.io/projected/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-kube-api-access-8t9gc\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.068969 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-catalog-content\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.069019 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-utilities\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.069683 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-catalog-content\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.069941 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-utilities\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.087177 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8t9gc\" (UniqueName: \"kubernetes.io/projected/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-kube-api-access-8t9gc\") pod \"redhat-marketplace-gxn8m\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.244227 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:17 crc kubenswrapper[4684]: I0123 10:39:17.785505 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxn8m"] Jan 23 10:39:17 crc kubenswrapper[4684]: W0123 10:39:17.804292 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7b3e4fc_bf12_4c2c_8d27_03bf616f4719.slice/crio-d94a4333cb3adf9b9a52bab544cfd5a0cb9dc5c85ee3d04d6d41a7bf1c6868a9 WatchSource:0}: Error finding container d94a4333cb3adf9b9a52bab544cfd5a0cb9dc5c85ee3d04d6d41a7bf1c6868a9: Status 404 returned error can't find the container with id d94a4333cb3adf9b9a52bab544cfd5a0cb9dc5c85ee3d04d6d41a7bf1c6868a9 Jan 23 10:39:18 crc kubenswrapper[4684]: I0123 10:39:18.026571 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerStarted","Data":"d94a4333cb3adf9b9a52bab544cfd5a0cb9dc5c85ee3d04d6d41a7bf1c6868a9"} Jan 23 10:39:19 crc kubenswrapper[4684]: I0123 10:39:19.036602 4684 generic.go:334] "Generic (PLEG): container finished" podID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerID="60f23995b4a5051fc743b81dc79a00776d5428007058828e06741add1c1a43d0" exitCode=0 Jan 23 10:39:19 crc kubenswrapper[4684]: I0123 10:39:19.036810 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerDied","Data":"60f23995b4a5051fc743b81dc79a00776d5428007058828e06741add1c1a43d0"} Jan 23 10:39:19 crc kubenswrapper[4684]: I0123 10:39:19.582202 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:39:19 crc kubenswrapper[4684]: E0123 10:39:19.582470 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:39:20 crc kubenswrapper[4684]: I0123 10:39:20.051279 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerStarted","Data":"73e28d87f899876c94facf676fecd5863ca7eb968f975eddfe8651fec6f48b0a"} Jan 23 10:39:21 crc kubenswrapper[4684]: I0123 10:39:21.061649 4684 generic.go:334] "Generic (PLEG): container finished" podID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerID="73e28d87f899876c94facf676fecd5863ca7eb968f975eddfe8651fec6f48b0a" exitCode=0 Jan 23 10:39:21 crc kubenswrapper[4684]: I0123 10:39:21.061718 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" 
event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerDied","Data":"73e28d87f899876c94facf676fecd5863ca7eb968f975eddfe8651fec6f48b0a"} Jan 23 10:39:24 crc kubenswrapper[4684]: I0123 10:39:24.091253 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerStarted","Data":"77b6abfe92ba072d18a8e725dcbfd92aae2fb10fec8de649af5d4b3bfb28c070"} Jan 23 10:39:24 crc kubenswrapper[4684]: I0123 10:39:24.115885 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxn8m" podStartSLOduration=4.132737378 podStartE2EDuration="8.115867515s" podCreationTimestamp="2026-01-23 10:39:16 +0000 UTC" firstStartedPulling="2026-01-23 10:39:19.039230437 +0000 UTC m=+5531.662608978" lastFinishedPulling="2026-01-23 10:39:23.022360574 +0000 UTC m=+5535.645739115" observedRunningTime="2026-01-23 10:39:24.113058474 +0000 UTC m=+5536.736437015" watchObservedRunningTime="2026-01-23 10:39:24.115867515 +0000 UTC m=+5536.739246056" Jan 23 10:39:27 crc kubenswrapper[4684]: I0123 10:39:27.245305 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:27 crc kubenswrapper[4684]: I0123 10:39:27.245651 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:27 crc kubenswrapper[4684]: I0123 10:39:27.293381 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:28 crc kubenswrapper[4684]: I0123 10:39:28.179659 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:28 crc kubenswrapper[4684]: I0123 10:39:28.231415 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxn8m"] Jan 23 10:39:30 crc kubenswrapper[4684]: I0123 10:39:30.151599 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxn8m" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="registry-server" containerID="cri-o://77b6abfe92ba072d18a8e725dcbfd92aae2fb10fec8de649af5d4b3bfb28c070" gracePeriod=2 Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.182972 4684 generic.go:334] "Generic (PLEG): container finished" podID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerID="77b6abfe92ba072d18a8e725dcbfd92aae2fb10fec8de649af5d4b3bfb28c070" exitCode=0 Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.183055 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerDied","Data":"77b6abfe92ba072d18a8e725dcbfd92aae2fb10fec8de649af5d4b3bfb28c070"} Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.413463 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.517063 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t9gc\" (UniqueName: \"kubernetes.io/projected/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-kube-api-access-8t9gc\") pod \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.517116 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-catalog-content\") pod \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.517277 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-utilities\") pod \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\" (UID: \"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719\") " Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.518409 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-utilities" (OuterVolumeSpecName: "utilities") pod "c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" (UID: "c7b3e4fc-bf12-4c2c-8d27-03bf616f4719"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.523282 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-kube-api-access-8t9gc" (OuterVolumeSpecName: "kube-api-access-8t9gc") pod "c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" (UID: "c7b3e4fc-bf12-4c2c-8d27-03bf616f4719"). InnerVolumeSpecName "kube-api-access-8t9gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.546157 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" (UID: "c7b3e4fc-bf12-4c2c-8d27-03bf616f4719"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.582129 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:39:32 crc kubenswrapper[4684]: E0123 10:39:32.582420 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.619540 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t9gc\" (UniqueName: \"kubernetes.io/projected/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-kube-api-access-8t9gc\") on node \"crc\" DevicePath \"\"" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.619581 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:39:32 crc kubenswrapper[4684]: I0123 10:39:32.619590 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.195243 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxn8m" event={"ID":"c7b3e4fc-bf12-4c2c-8d27-03bf616f4719","Type":"ContainerDied","Data":"d94a4333cb3adf9b9a52bab544cfd5a0cb9dc5c85ee3d04d6d41a7bf1c6868a9"} Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.195314 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxn8m" Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.195519 4684 scope.go:117] "RemoveContainer" containerID="77b6abfe92ba072d18a8e725dcbfd92aae2fb10fec8de649af5d4b3bfb28c070" Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.240847 4684 scope.go:117] "RemoveContainer" containerID="73e28d87f899876c94facf676fecd5863ca7eb968f975eddfe8651fec6f48b0a" Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.249215 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxn8m"] Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.262006 4684 scope.go:117] "RemoveContainer" containerID="60f23995b4a5051fc743b81dc79a00776d5428007058828e06741add1c1a43d0" Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.265084 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxn8m"] Jan 23 10:39:33 crc kubenswrapper[4684]: I0123 10:39:33.591523 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" path="/var/lib/kubelet/pods/c7b3e4fc-bf12-4c2c-8d27-03bf616f4719/volumes" Jan 23 10:39:44 crc kubenswrapper[4684]: I0123 10:39:44.581921 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:39:44 crc kubenswrapper[4684]: E0123 10:39:44.583690 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:39:56 crc kubenswrapper[4684]: I0123 10:39:56.582640 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:39:56 crc kubenswrapper[4684]: E0123 10:39:56.583631 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:40:07 crc kubenswrapper[4684]: I0123 10:40:07.591466 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:40:07 crc kubenswrapper[4684]: E0123 10:40:07.592180 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:40:18 crc kubenswrapper[4684]: I0123 10:40:18.582024 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:40:18 crc kubenswrapper[4684]: E0123 10:40:18.582740 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:40:29 crc kubenswrapper[4684]: I0123 10:40:29.583125 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:40:29 crc kubenswrapper[4684]: E0123 10:40:29.583989 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.602384 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-flg8f/must-gather-qb77x"] Jan 23 10:40:36 crc kubenswrapper[4684]: E0123 10:40:36.605032 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="registry-server" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.605050 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="registry-server" Jan 23 10:40:36 crc kubenswrapper[4684]: E0123 10:40:36.605080 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="extract-content" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.605090 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="extract-content" Jan 23 10:40:36 crc kubenswrapper[4684]: E0123 10:40:36.605118 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="extract-utilities" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.605128 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="extract-utilities" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.605336 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b3e4fc-bf12-4c2c-8d27-03bf616f4719" containerName="registry-server" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.606481 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.620339 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-flg8f"/"default-dockercfg-26fh7" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.620885 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-flg8f"/"openshift-service-ca.crt" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.620885 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-flg8f"/"kube-root-ca.crt" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.630830 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-flg8f/must-gather-qb77x"] Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.675422 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6b31bbd-d573-438a-a8d1-7a2376673a73-must-gather-output\") pod \"must-gather-qb77x\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.675898 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmkl\" (UniqueName: \"kubernetes.io/projected/c6b31bbd-d573-438a-a8d1-7a2376673a73-kube-api-access-thmkl\") pod \"must-gather-qb77x\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.778518 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmkl\" (UniqueName: \"kubernetes.io/projected/c6b31bbd-d573-438a-a8d1-7a2376673a73-kube-api-access-thmkl\") pod \"must-gather-qb77x\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.778633 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6b31bbd-d573-438a-a8d1-7a2376673a73-must-gather-output\") pod \"must-gather-qb77x\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.779224 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6b31bbd-d573-438a-a8d1-7a2376673a73-must-gather-output\") pod \"must-gather-qb77x\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.801068 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmkl\" (UniqueName: \"kubernetes.io/projected/c6b31bbd-d573-438a-a8d1-7a2376673a73-kube-api-access-thmkl\") pod \"must-gather-qb77x\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:36 crc kubenswrapper[4684]: I0123 10:40:36.930834 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:40:37 crc kubenswrapper[4684]: I0123 10:40:37.636824 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-flg8f/must-gather-qb77x"] Jan 23 10:40:37 crc kubenswrapper[4684]: I0123 10:40:37.773489 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/must-gather-qb77x" event={"ID":"c6b31bbd-d573-438a-a8d1-7a2376673a73","Type":"ContainerStarted","Data":"10cf1001ae01cc3eb176172620b62d86351105d1d1b0b37c6c6efc980cc4990e"} Jan 23 10:40:38 crc kubenswrapper[4684]: I0123 10:40:38.785063 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/must-gather-qb77x" event={"ID":"c6b31bbd-d573-438a-a8d1-7a2376673a73","Type":"ContainerStarted","Data":"369cffed874857fdd21e4842ad5d9e5fc4a4e19647922e74c7babcf7fbd2d84b"} Jan 23 10:40:38 crc kubenswrapper[4684]: I0123 10:40:38.785583 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/must-gather-qb77x" event={"ID":"c6b31bbd-d573-438a-a8d1-7a2376673a73","Type":"ContainerStarted","Data":"830c5570617066f5c6adc549c2ca057cd3ef40a0c4b0f157845bfd8dd5a219e4"} Jan 23 10:40:38 crc kubenswrapper[4684]: I0123 10:40:38.845218 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-flg8f/must-gather-qb77x" podStartSLOduration=2.845202461 podStartE2EDuration="2.845202461s" podCreationTimestamp="2026-01-23 10:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:40:38.839020774 +0000 UTC m=+5611.462399315" watchObservedRunningTime="2026-01-23 10:40:38.845202461 +0000 UTC m=+5611.468581002" Jan 23 10:40:40 crc kubenswrapper[4684]: I0123 10:40:40.582727 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:40:40 crc kubenswrapper[4684]: E0123 10:40:40.583451 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.246518 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-flg8f/crc-debug-t8tv4"] Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.248259 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.351360 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-host\") pod \"crc-debug-t8tv4\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.351912 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qm98\" (UniqueName: \"kubernetes.io/projected/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-kube-api-access-5qm98\") pod \"crc-debug-t8tv4\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.454097 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qm98\" (UniqueName: \"kubernetes.io/projected/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-kube-api-access-5qm98\") pod \"crc-debug-t8tv4\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.454427 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-host\") pod \"crc-debug-t8tv4\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.454553 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-host\") pod \"crc-debug-t8tv4\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.479024 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qm98\" (UniqueName: \"kubernetes.io/projected/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-kube-api-access-5qm98\") pod \"crc-debug-t8tv4\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.571770 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:40:43 crc kubenswrapper[4684]: I0123 10:40:43.850324 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" event={"ID":"ed2fa2b2-807e-4620-8103-5f40c3cafcf1","Type":"ContainerStarted","Data":"21ec8a72ebafd5ef6b30886b5a608955454e4c88b2cfc6af0e85b26bcbc8b1ac"} Jan 23 10:40:44 crc kubenswrapper[4684]: I0123 10:40:44.858526 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" event={"ID":"ed2fa2b2-807e-4620-8103-5f40c3cafcf1","Type":"ContainerStarted","Data":"1d04f21db2dd498eff68a925b9628260494183e78ffde154c498b63e7a16ecc6"} Jan 23 10:40:44 crc kubenswrapper[4684]: I0123 10:40:44.879134 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" podStartSLOduration=1.879117012 podStartE2EDuration="1.879117012s" podCreationTimestamp="2026-01-23 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:40:44.874689595 +0000 UTC m=+5617.498068146" watchObservedRunningTime="2026-01-23 10:40:44.879117012 +0000 UTC m=+5617.502495553" Jan 23 10:40:51 crc kubenswrapper[4684]: I0123 10:40:51.582330 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:40:51 crc kubenswrapper[4684]: I0123 10:40:51.915271 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"f4341f39926607ae03c3e178cd27115ca38cc60da7be79d26e9660a1c7ba8da6"} Jan 23 10:41:21 crc kubenswrapper[4684]: I0123 10:41:21.158968 4684 generic.go:334] "Generic (PLEG): container finished" podID="ed2fa2b2-807e-4620-8103-5f40c3cafcf1" containerID="1d04f21db2dd498eff68a925b9628260494183e78ffde154c498b63e7a16ecc6" exitCode=0 Jan 23 10:41:21 crc kubenswrapper[4684]: I0123 10:41:21.159469 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" event={"ID":"ed2fa2b2-807e-4620-8103-5f40c3cafcf1","Type":"ContainerDied","Data":"1d04f21db2dd498eff68a925b9628260494183e78ffde154c498b63e7a16ecc6"} Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.295277 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.330175 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-flg8f/crc-debug-t8tv4"] Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.340616 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-flg8f/crc-debug-t8tv4"] Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.353379 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-host\") pod \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.353520 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qm98\" (UniqueName: \"kubernetes.io/projected/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-kube-api-access-5qm98\") pod \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\" (UID: \"ed2fa2b2-807e-4620-8103-5f40c3cafcf1\") " Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.353526 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-host" (OuterVolumeSpecName: "host") pod "ed2fa2b2-807e-4620-8103-5f40c3cafcf1" (UID: "ed2fa2b2-807e-4620-8103-5f40c3cafcf1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.353972 4684 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-host\") on node \"crc\" DevicePath \"\"" Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.358658 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-kube-api-access-5qm98" (OuterVolumeSpecName: "kube-api-access-5qm98") pod "ed2fa2b2-807e-4620-8103-5f40c3cafcf1" (UID: "ed2fa2b2-807e-4620-8103-5f40c3cafcf1"). InnerVolumeSpecName "kube-api-access-5qm98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:41:22 crc kubenswrapper[4684]: I0123 10:41:22.455538 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qm98\" (UniqueName: \"kubernetes.io/projected/ed2fa2b2-807e-4620-8103-5f40c3cafcf1-kube-api-access-5qm98\") on node \"crc\" DevicePath \"\"" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.175891 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ec8a72ebafd5ef6b30886b5a608955454e4c88b2cfc6af0e85b26bcbc8b1ac" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.175945 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-t8tv4" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.592677 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2fa2b2-807e-4620-8103-5f40c3cafcf1" path="/var/lib/kubelet/pods/ed2fa2b2-807e-4620-8103-5f40c3cafcf1/volumes" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.593994 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-flg8f/crc-debug-w9v5m"] Jan 23 10:41:23 crc kubenswrapper[4684]: E0123 10:41:23.594452 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2fa2b2-807e-4620-8103-5f40c3cafcf1" containerName="container-00" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.594474 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2fa2b2-807e-4620-8103-5f40c3cafcf1" containerName="container-00" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.594675 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2fa2b2-807e-4620-8103-5f40c3cafcf1" containerName="container-00" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.596021 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.680417 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2a6f05-e835-4043-bb5f-da0606608fd8-host\") pod \"crc-debug-w9v5m\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.680558 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8gn\" (UniqueName: \"kubernetes.io/projected/ab2a6f05-e835-4043-bb5f-da0606608fd8-kube-api-access-qf8gn\") pod \"crc-debug-w9v5m\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.782074 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2a6f05-e835-4043-bb5f-da0606608fd8-host\") pod \"crc-debug-w9v5m\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.782227 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8gn\" (UniqueName: \"kubernetes.io/projected/ab2a6f05-e835-4043-bb5f-da0606608fd8-kube-api-access-qf8gn\") pod \"crc-debug-w9v5m\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.782286 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2a6f05-e835-4043-bb5f-da0606608fd8-host\") pod \"crc-debug-w9v5m\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.800233 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8gn\" (UniqueName: \"kubernetes.io/projected/ab2a6f05-e835-4043-bb5f-da0606608fd8-kube-api-access-qf8gn\") pod \"crc-debug-w9v5m\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " 
pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:23 crc kubenswrapper[4684]: I0123 10:41:23.912475 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:24 crc kubenswrapper[4684]: I0123 10:41:24.186451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" event={"ID":"ab2a6f05-e835-4043-bb5f-da0606608fd8","Type":"ContainerStarted","Data":"a03c8adaedf0cb6c22f4192e534d8fd43977dba97cae4f141c4bd92dfb4c812a"} Jan 23 10:41:24 crc kubenswrapper[4684]: I0123 10:41:24.186735 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" event={"ID":"ab2a6f05-e835-4043-bb5f-da0606608fd8","Type":"ContainerStarted","Data":"09569df455d6218b2bd377740c0b4a669a1e295501277ac42fc84e0407cb1b41"} Jan 23 10:41:24 crc kubenswrapper[4684]: I0123 10:41:24.205941 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" podStartSLOduration=1.205921114 podStartE2EDuration="1.205921114s" podCreationTimestamp="2026-01-23 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:41:24.199577872 +0000 UTC m=+5656.822956403" watchObservedRunningTime="2026-01-23 10:41:24.205921114 +0000 UTC m=+5656.829299665" Jan 23 10:41:25 crc kubenswrapper[4684]: I0123 10:41:25.199471 4684 generic.go:334] "Generic (PLEG): container finished" podID="ab2a6f05-e835-4043-bb5f-da0606608fd8" containerID="a03c8adaedf0cb6c22f4192e534d8fd43977dba97cae4f141c4bd92dfb4c812a" exitCode=0 Jan 23 10:41:25 crc kubenswrapper[4684]: I0123 10:41:25.199847 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" event={"ID":"ab2a6f05-e835-4043-bb5f-da0606608fd8","Type":"ContainerDied","Data":"a03c8adaedf0cb6c22f4192e534d8fd43977dba97cae4f141c4bd92dfb4c812a"} Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.324075 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.363630 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-flg8f/crc-debug-w9v5m"] Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.374357 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-flg8f/crc-debug-w9v5m"] Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.445792 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2a6f05-e835-4043-bb5f-da0606608fd8-host\") pod \"ab2a6f05-e835-4043-bb5f-da0606608fd8\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.445855 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2a6f05-e835-4043-bb5f-da0606608fd8-host" (OuterVolumeSpecName: "host") pod "ab2a6f05-e835-4043-bb5f-da0606608fd8" (UID: "ab2a6f05-e835-4043-bb5f-da0606608fd8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.446142 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8gn\" (UniqueName: \"kubernetes.io/projected/ab2a6f05-e835-4043-bb5f-da0606608fd8-kube-api-access-qf8gn\") pod \"ab2a6f05-e835-4043-bb5f-da0606608fd8\" (UID: \"ab2a6f05-e835-4043-bb5f-da0606608fd8\") " Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.446681 4684 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2a6f05-e835-4043-bb5f-da0606608fd8-host\") on node \"crc\" DevicePath \"\"" Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.451898 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2a6f05-e835-4043-bb5f-da0606608fd8-kube-api-access-qf8gn" (OuterVolumeSpecName: "kube-api-access-qf8gn") pod "ab2a6f05-e835-4043-bb5f-da0606608fd8" (UID: "ab2a6f05-e835-4043-bb5f-da0606608fd8"). InnerVolumeSpecName "kube-api-access-qf8gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:41:26 crc kubenswrapper[4684]: I0123 10:41:26.548492 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf8gn\" (UniqueName: \"kubernetes.io/projected/ab2a6f05-e835-4043-bb5f-da0606608fd8-kube-api-access-qf8gn\") on node \"crc\" DevicePath \"\"" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.215950 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09569df455d6218b2bd377740c0b4a669a1e295501277ac42fc84e0407cb1b41" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.216012 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-w9v5m" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.567477 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-flg8f/crc-debug-l8lvc"] Jan 23 10:41:27 crc kubenswrapper[4684]: E0123 10:41:27.568287 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a6f05-e835-4043-bb5f-da0606608fd8" containerName="container-00" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.568299 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a6f05-e835-4043-bb5f-da0606608fd8" containerName="container-00" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.568481 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a6f05-e835-4043-bb5f-da0606608fd8" containerName="container-00" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.569046 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.598469 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2a6f05-e835-4043-bb5f-da0606608fd8" path="/var/lib/kubelet/pods/ab2a6f05-e835-4043-bb5f-da0606608fd8/volumes" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.668433 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31a44408-fe16-4468-86cb-39a5e01e807e-host\") pod \"crc-debug-l8lvc\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.668906 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfm4s\" (UniqueName: \"kubernetes.io/projected/31a44408-fe16-4468-86cb-39a5e01e807e-kube-api-access-cfm4s\") pod \"crc-debug-l8lvc\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.770440 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfm4s\" (UniqueName: \"kubernetes.io/projected/31a44408-fe16-4468-86cb-39a5e01e807e-kube-api-access-cfm4s\") pod \"crc-debug-l8lvc\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.770492 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31a44408-fe16-4468-86cb-39a5e01e807e-host\") pod \"crc-debug-l8lvc\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.770686 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31a44408-fe16-4468-86cb-39a5e01e807e-host\") pod \"crc-debug-l8lvc\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.794550 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfm4s\" (UniqueName: \"kubernetes.io/projected/31a44408-fe16-4468-86cb-39a5e01e807e-kube-api-access-cfm4s\") pod \"crc-debug-l8lvc\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:27 crc kubenswrapper[4684]: I0123 10:41:27.890930 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:28 crc kubenswrapper[4684]: I0123 10:41:28.226470 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-l8lvc" event={"ID":"31a44408-fe16-4468-86cb-39a5e01e807e","Type":"ContainerStarted","Data":"08eaa1bbbdec5d9ca1ea7450930fa76210f4440ccb012efda9365b521635cb0f"} Jan 23 10:41:29 crc kubenswrapper[4684]: I0123 10:41:29.235755 4684 generic.go:334] "Generic (PLEG): container finished" podID="31a44408-fe16-4468-86cb-39a5e01e807e" containerID="e252d61ac2ee719b4ecc143e1527e728c2caf8d4589fcfc3e986aef5dd2fb60c" exitCode=0 Jan 23 10:41:29 crc kubenswrapper[4684]: I0123 10:41:29.236057 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/crc-debug-l8lvc" event={"ID":"31a44408-fe16-4468-86cb-39a5e01e807e","Type":"ContainerDied","Data":"e252d61ac2ee719b4ecc143e1527e728c2caf8d4589fcfc3e986aef5dd2fb60c"} Jan 23 10:41:29 crc kubenswrapper[4684]: I0123 10:41:29.324289 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-flg8f/crc-debug-l8lvc"] Jan 23 10:41:29 crc kubenswrapper[4684]: I0123 10:41:29.333806 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-flg8f/crc-debug-l8lvc"] Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.361112 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.425379 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31a44408-fe16-4468-86cb-39a5e01e807e-host\") pod \"31a44408-fe16-4468-86cb-39a5e01e807e\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.425786 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfm4s\" (UniqueName: \"kubernetes.io/projected/31a44408-fe16-4468-86cb-39a5e01e807e-kube-api-access-cfm4s\") pod \"31a44408-fe16-4468-86cb-39a5e01e807e\" (UID: \"31a44408-fe16-4468-86cb-39a5e01e807e\") " Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.425588 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a44408-fe16-4468-86cb-39a5e01e807e-host" (OuterVolumeSpecName: "host") pod "31a44408-fe16-4468-86cb-39a5e01e807e" (UID: "31a44408-fe16-4468-86cb-39a5e01e807e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.426649 4684 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31a44408-fe16-4468-86cb-39a5e01e807e-host\") on node \"crc\" DevicePath \"\"" Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.445505 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a44408-fe16-4468-86cb-39a5e01e807e-kube-api-access-cfm4s" (OuterVolumeSpecName: "kube-api-access-cfm4s") pod "31a44408-fe16-4468-86cb-39a5e01e807e" (UID: "31a44408-fe16-4468-86cb-39a5e01e807e"). InnerVolumeSpecName "kube-api-access-cfm4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:41:30 crc kubenswrapper[4684]: I0123 10:41:30.528676 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfm4s\" (UniqueName: \"kubernetes.io/projected/31a44408-fe16-4468-86cb-39a5e01e807e-kube-api-access-cfm4s\") on node \"crc\" DevicePath \"\"" Jan 23 10:41:31 crc kubenswrapper[4684]: I0123 10:41:31.254307 4684 scope.go:117] "RemoveContainer" containerID="e252d61ac2ee719b4ecc143e1527e728c2caf8d4589fcfc3e986aef5dd2fb60c" Jan 23 10:41:31 crc kubenswrapper[4684]: I0123 10:41:31.254353 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-flg8f/crc-debug-l8lvc" Jan 23 10:41:31 crc kubenswrapper[4684]: I0123 10:41:31.594256 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a44408-fe16-4468-86cb-39a5e01e807e" path="/var/lib/kubelet/pods/31a44408-fe16-4468-86cb-39a5e01e807e/volumes" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.008895 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fb45b76fb-6d9bh_d239343a-876f-4e5e-abf8-2bd91fee9812/barbican-api/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.264091 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fb45b76fb-6d9bh_d239343a-876f-4e5e-abf8-2bd91fee9812/barbican-api-log/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.365582 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c6d999bfd-wgh9p_dd332188-f0b4-4a86-a7ec-c722f64e1e41/barbican-keystone-listener/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.505555 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c6d999bfd-wgh9p_dd332188-f0b4-4a86-a7ec-c722f64e1e41/barbican-keystone-listener-log/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.642785 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bcc55f89-qgvh5_996c56f4-2118-4795-91da-d78f1ad2f792/barbican-worker/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.664294 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bcc55f89-qgvh5_996c56f4-2118-4795-91da-d78f1ad2f792/barbican-worker-log/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.864919 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j7qnk_47eb1e50-9644-40c1-b739-f70c2274808c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 10:43:03 crc kubenswrapper[4684]: I0123 10:43:03.981478 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/ceilometer-central-agent/0.log" Jan 23 10:43:04 crc kubenswrapper[4684]: I0123 10:43:04.117444 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/ceilometer-notification-agent/0.log" Jan 23 10:43:04 crc kubenswrapper[4684]: I0123 10:43:04.185665 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/proxy-httpd/0.log" Jan 23 10:43:04 crc kubenswrapper[4684]: I0123 10:43:04.294030 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19914f8a-2409-41e0-accb-221ccdb4428f/sg-core/0.log" Jan 23 10:43:04 crc 
Jan 23 10:43:04 crc kubenswrapper[4684]: I0123 10:43:04.399880 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-8tpv8_5f77b49d-cf17-4b55-9ef8-0d0e13966845/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:04 crc kubenswrapper[4684]: I0123 10:43:04.550517 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fqnlz_01a17f7c-b39e-4dd6-9a40-d474056ee41a/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.003367 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6fc125a2-7cc0-40a7-bb2c-acc93ba7866a/cinder-api-log/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.023746 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6fc125a2-7cc0-40a7-bb2c-acc93ba7866a/cinder-api/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.276541 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_46859102-633b-4fca-bbeb-c34dfdbea96d/probe/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.417005 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_46859102-633b-4fca-bbeb-c34dfdbea96d/cinder-backup/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.461091 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1bad04-8e0e-4dee-8cef-90091c05526f/cinder-scheduler/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.625887 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1bad04-8e0e-4dee-8cef-90091c05526f/probe/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.770209 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_2d39cffc-9089-47c7-acd7-50bb64ed8f61/cinder-volume/0.log"
Jan 23 10:43:05 crc kubenswrapper[4684]: I0123 10:43:05.810028 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_2d39cffc-9089-47c7-acd7-50bb64ed8f61/probe/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.043628 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nwlp8_8cbed0d5-0896-4efe-af09-8469dcbd2cfb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.089594 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bj6vb_f86589ab-3e45-48a5-a081-96572c2bcfca/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.309444 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-dbdfc799f-zk2np_e93e4d61-ad39-41c9-80ce-653f91213f4d/init/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.537896 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-dbdfc799f-zk2np_e93e4d61-ad39-41c9-80ce-653f91213f4d/init/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.812162 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a7c366f0-4ad9-4ec9-91ff-bab599bae5d0/glance-httpd/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.889363 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-dbdfc799f-zk2np_e93e4d61-ad39-41c9-80ce-653f91213f4d/dnsmasq-dns/0.log"
Jan 23 10:43:06 crc kubenswrapper[4684]: I0123 10:43:06.902924 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a7c366f0-4ad9-4ec9-91ff-bab599bae5d0/glance-log/0.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.087326 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0804b14-3b60-4dbc-8e29-9cb493b96de4/glance-httpd/0.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.315854 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0804b14-3b60-4dbc-8e29-9cb493b96de4/glance-log/0.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.341636 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df5b758fb-8sfdj_78d43a15-1645-42a6-a25b-a6c4d7a244c4/horizon/1.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.477475 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df5b758fb-8sfdj_78d43a15-1645-42a6-a25b-a6c4d7a244c4/horizon/0.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.708232 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dbfqx_2aa3021c-18ad-49eb-ae34-b54e30548ccf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.943676 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df5b758fb-8sfdj_78d43a15-1645-42a6-a25b-a6c4d7a244c4/horizon-log/0.log"
Jan 23 10:43:07 crc kubenswrapper[4684]: I0123 10:43:07.949465 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2nhwv_9ed4c3b1-8a47-426f-a72f-80df33efa202/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:08 crc kubenswrapper[4684]: I0123 10:43:08.330211 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74b94f7dd5-jfwln_c7c30d54-36fc-47e2-ad40-c3e530d1b721/keystone-api/0.log"
Jan 23 10:43:08 crc kubenswrapper[4684]: I0123 10:43:08.697769 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486041-8929f_1ca6f1ca-5942-4aee-a0bc-b7d2549de3a2/keystone-cron/0.log"
Jan 23 10:43:08 crc kubenswrapper[4684]: I0123 10:43:08.915135 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2380836b-7770-4b06-9cb2-b61dfda5e96a/kube-state-metrics/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.040592 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-p7z6q_5310afc8-7024-4b88-b421-28631272375a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.251201 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f183a69c-226e-4737-81b8-01cae8e76539/manila-api-log/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.360829 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f183a69c-226e-4737-81b8-01cae8e76539/manila-api/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.430679 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_1be4f920-aa7e-412c-8241-a795a65be1bb/probe/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.612043 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f7b4b82a-f432-48b9-ae9c-2d23a78aec42/probe/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.621888 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_1be4f920-aa7e-412c-8241-a795a65be1bb/manila-scheduler/0.log"
Jan 23 10:43:09 crc kubenswrapper[4684]: I0123 10:43:09.764053 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f7b4b82a-f432-48b9-ae9c-2d23a78aec42/manila-share/0.log"
Jan 23 10:43:10 crc kubenswrapper[4684]: I0123 10:43:10.149394 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f5484d975-q9jz7_51e1f37f-89c0-4b47-944a-ca74b33d32ce/neutron-api/0.log"
Jan 23 10:43:10 crc kubenswrapper[4684]: I0123 10:43:10.161078 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f5484d975-q9jz7_51e1f37f-89c0-4b47-944a-ca74b33d32ce/neutron-httpd/0.log"
Jan 23 10:43:10 crc kubenswrapper[4684]: I0123 10:43:10.407551 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bkm2h_cb533e15-1dac-453b-a0d7-041112a91f0b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:11 crc kubenswrapper[4684]: I0123 10:43:11.109958 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0cd885d-0d54-4392-9d8a-cd2cb48b47d2/nova-api-log/0.log"
Jan 23 10:43:11 crc kubenswrapper[4684]: I0123 10:43:11.146448 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f499765b-3360-4bf8-af8c-415602c1c519/nova-cell0-conductor-conductor/0.log"
Jan 23 10:43:11 crc kubenswrapper[4684]: I0123 10:43:11.760611 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a36234df-99ba-470a-8309-55d1e0f53072/nova-cell1-conductor-conductor/0.log"
Jan 23 10:43:11 crc kubenswrapper[4684]: I0123 10:43:11.788261 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0cd885d-0d54-4392-9d8a-cd2cb48b47d2/nova-api-api/0.log"
Jan 23 10:43:11 crc kubenswrapper[4684]: I0123 10:43:11.790068 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c03f1660-c3bd-4803-b1fd-c07c36966484/nova-cell1-novncproxy-novncproxy/0.log"
Jan 23 10:43:12 crc kubenswrapper[4684]: I0123 10:43:12.085793 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-228hk_55887726-e3b8-4e73-a5fe-c82860636e1b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:12 crc kubenswrapper[4684]: I0123 10:43:12.583019 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_48b55b45-1ad6-4310-aaff-0a978bbf5538/nova-metadata-log/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.043874 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2fab0d59-7e3d-4c70-a3a7-63dcb3629988/nova-scheduler-scheduler/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.069260 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80a7fc30-a101-4948-9e81-34c2dfb02797/mysql-bootstrap/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.314975 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80a7fc30-a101-4948-9e81-34c2dfb02797/mysql-bootstrap/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.459283 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80a7fc30-a101-4948-9e81-34c2dfb02797/galera/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.576723 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c5f17c-8303-4cae-b577-1da34c402098/mysql-bootstrap/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.728777 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.728836 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.833055 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c5f17c-8303-4cae-b577-1da34c402098/mysql-bootstrap/0.log"
Jan 23 10:43:13 crc kubenswrapper[4684]: I0123 10:43:13.935347 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c5f17c-8303-4cae-b577-1da34c402098/galera/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.040433 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_cfb564ff-94ae-4292-ad6c-41a36677efeb/openstackclient/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.254858 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_48b55b45-1ad6-4310-aaff-0a978bbf5538/nova-metadata-metadata/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.264928 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jgsg8_f6d184f2-6bff-43ba-98a6-6e131c7b45a8/ovn-controller/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.598985 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-x2qgc_8a2ed8cb-f8c4-4ee2-884e-13a286ef4c86/openstack-network-exporter/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.644681 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovsdb-server-init/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.760573 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovsdb-server-init/0.log"
Jan 23 10:43:14 crc kubenswrapper[4684]: I0123 10:43:14.915884 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovs-vswitchd/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.077307 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9klss_e755b648-4ecf-4fc5-922a-39c5061827de/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.080618 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c5pjd_c816dd8b-7da7-4424-8405-b44759f7861e/ovsdb-server/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.278371 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_366a8d70-2aa4-439d-a14e-4459b3f45736/openstack-network-exporter/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.403563 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_366a8d70-2aa4-439d-a14e-4459b3f45736/ovn-northd/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.563763 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_960d904d-7d3d-4c6a-a933-cf6c6a31d01d/openstack-network-exporter/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.658640 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_960d904d-7d3d-4c6a-a933-cf6c6a31d01d/ovsdbserver-nb/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.755061 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_092669ed-870b-4e9d-a34d-f62fca6b1660/openstack-network-exporter/0.log"
Jan 23 10:43:15 crc kubenswrapper[4684]: I0123 10:43:15.876085 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_092669ed-870b-4e9d-a34d-f62fca6b1660/ovsdbserver-sb/0.log"
Jan 23 10:43:16 crc kubenswrapper[4684]: I0123 10:43:16.201762 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f7c769f78-7sfgw_90ee2ffb-783f-491a-9fa8-e37f267872f6/placement-api/0.log"
Jan 23 10:43:16 crc kubenswrapper[4684]: I0123 10:43:16.214076 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f7c769f78-7sfgw_90ee2ffb-783f-491a-9fa8-e37f267872f6/placement-log/0.log"
Jan 23 10:43:16 crc kubenswrapper[4684]: I0123 10:43:16.384311 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5b7f0e5b-e1ba-4da5-b644-e16236fd5403/setup-container/0.log"
Jan 23 10:43:16 crc kubenswrapper[4684]: I0123 10:43:16.917532 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5b7f0e5b-e1ba-4da5-b644-e16236fd5403/setup-container/0.log"
Jan 23 10:43:16 crc kubenswrapper[4684]: I0123 10:43:16.961209 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5b7f0e5b-e1ba-4da5-b644-e16236fd5403/rabbitmq/0.log"
Jan 23 10:43:16 crc kubenswrapper[4684]: I0123 10:43:16.967479 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d05a61f9-7d60-4073-ae62-7a4a59fe6ed6/setup-container/0.log"
Jan 23 10:43:17 crc kubenswrapper[4684]: I0123 10:43:17.273452 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d05a61f9-7d60-4073-ae62-7a4a59fe6ed6/rabbitmq/0.log"
Jan 23 10:43:17 crc kubenswrapper[4684]: I0123 10:43:17.301056 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d05a61f9-7d60-4073-ae62-7a4a59fe6ed6/setup-container/0.log"
Jan 23 10:43:17 crc kubenswrapper[4684]: I0123 10:43:17.401840 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hnv4c_89a1992b-4dc8-4218-a148-bec983fddd94/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:17 crc kubenswrapper[4684]: I0123 10:43:17.573890 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5qtdd_6572a448-1ced-481b-af00-e2edb0d95187/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:17 crc kubenswrapper[4684]: I0123 10:43:17.701677 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7wznn_1139aa20-9131-40c7-bd06-f108d5ac42ab/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:18 crc kubenswrapper[4684]: I0123 10:43:18.011112 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bpqtq_d7513ac8-1304-4762-a2f2-6d3b152fc4a7/ssh-known-hosts-edpm-deployment/0.log"
Jan 23 10:43:18 crc kubenswrapper[4684]: I0123 10:43:18.183318 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a0e2465e-0c50-4aaa-a1c7-bdbd8d3a516a/tempest-tests-tempest-tests-runner/0.log"
Jan 23 10:43:18 crc kubenswrapper[4684]: I0123 10:43:18.409838 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6a468899-9742-4407-95d4-55c6e2c14fe2/test-operator-logs-container/0.log"
Jan 23 10:43:18 crc kubenswrapper[4684]: I0123 10:43:18.598475 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tlkk8_e2aa43b6-cc3e-4a3f-a98d-a788624c5253/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 10:43:36 crc kubenswrapper[4684]: I0123 10:43:36.055701 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7320f601-5b97-49b4-af32-aeae7d297ed1/memcached/0.log"
Jan 23 10:43:43 crc kubenswrapper[4684]: I0123 10:43:43.728172 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 10:43:43 crc kubenswrapper[4684]: I0123 10:43:43.728676 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 10:43:49 crc kubenswrapper[4684]: I0123 10:43:49.879527 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kttjd"]
Jan 23 10:43:49 crc kubenswrapper[4684]: E0123 10:43:49.881390 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a44408-fe16-4468-86cb-39a5e01e807e" containerName="container-00"
Jan 23 10:43:49 crc kubenswrapper[4684]: I0123 10:43:49.881484 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a44408-fe16-4468-86cb-39a5e01e807e" containerName="container-00"
Jan 23 10:43:49 crc kubenswrapper[4684]: I0123 10:43:49.881768 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a44408-fe16-4468-86cb-39a5e01e807e" containerName="container-00"
Jan 23 10:43:49 crc kubenswrapper[4684]: I0123 10:43:49.883131 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kttjd"
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:49 crc kubenswrapper[4684]: I0123 10:43:49.931747 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kttjd"] Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.060276 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-utilities\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.060434 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqxt\" (UniqueName: \"kubernetes.io/projected/6fdb8806-11dc-4679-b511-d9563c55b72c-kube-api-access-bbqxt\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.060549 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-catalog-content\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.162404 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqxt\" (UniqueName: \"kubernetes.io/projected/6fdb8806-11dc-4679-b511-d9563c55b72c-kube-api-access-bbqxt\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.162549 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-catalog-content\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.162669 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-utilities\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.163061 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-catalog-content\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.163209 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-utilities\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.184806 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bbqxt\" (UniqueName: \"kubernetes.io/projected/6fdb8806-11dc-4679-b511-d9563c55b72c-kube-api-access-bbqxt\") pod \"redhat-operators-kttjd\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") " pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.245507 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:43:50 crc kubenswrapper[4684]: W0123 10:43:50.771207 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fdb8806_11dc_4679_b511_d9563c55b72c.slice/crio-a750b7e004c1e0bf34c0795ee44f710b02c408695fd34d02c5735ee31c79406a WatchSource:0}: Error finding container a750b7e004c1e0bf34c0795ee44f710b02c408695fd34d02c5735ee31c79406a: Status 404 returned error can't find the container with id a750b7e004c1e0bf34c0795ee44f710b02c408695fd34d02c5735ee31c79406a Jan 23 10:43:50 crc kubenswrapper[4684]: I0123 10:43:50.771562 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kttjd"] Jan 23 10:43:51 crc kubenswrapper[4684]: I0123 10:43:51.516616 4684 generic.go:334] "Generic (PLEG): container finished" podID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerID="6ffb75538c95974f5e18f1556363a0c9a185cc3643f6b1bfbbb6654321d6036f" exitCode=0 Jan 23 10:43:51 crc kubenswrapper[4684]: I0123 10:43:51.516897 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerDied","Data":"6ffb75538c95974f5e18f1556363a0c9a185cc3643f6b1bfbbb6654321d6036f"} Jan 23 10:43:51 crc kubenswrapper[4684]: I0123 10:43:51.516924 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerStarted","Data":"a750b7e004c1e0bf34c0795ee44f710b02c408695fd34d02c5735ee31c79406a"} Jan 23 10:43:51 crc kubenswrapper[4684]: I0123 10:43:51.518804 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.232383 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/util/0.log" Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.345300 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/pull/0.log" Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.358755 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/util/0.log" Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.431862 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/pull/0.log" Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.685263 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/pull/0.log" Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.714016 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/util/0.log" Jan 23 10:43:52 crc kubenswrapper[4684]: I0123 10:43:52.768215 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86bjvsp_985d0dfc-6e0c-4cdc-98c6-045b88957e25/extract/0.log" Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.102291 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-sbkxr_dc5b7444-cf61-439c-a7ed-3c97289e6cfe/manager/0.log" Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.156577 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-srv5g_fd2ff302-08d1-4fd7-a45c-152155876b56/manager/0.log" Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.273452 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-p77dl_31af0894-c5ac-41ef-842e-b7d01dfa2229/manager/0.log" Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.413384 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-hx5dq_299d3d78-4346-43f2-86f2-e1a3c20513a5/manager/0.log" Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.534744 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-ht6sr_294e6daa-1ac9-4afc-b489-f7cff06c18ec/manager/0.log" Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.544853 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerStarted","Data":"20c2f9f1a226fa7267f6b86c95ee3ef58c96a602105668cffbd36bdda353c745"} Jan 23 10:43:53 crc kubenswrapper[4684]: I0123 10:43:53.913203 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-gc4d6_d61b277c-9b8c-423e-9b63-66dd812147c3/manager/0.log" Jan 23 10:43:54 crc kubenswrapper[4684]: I0123 10:43:54.077797 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-6s79c_5bb19409-93c9-4453-800c-ce2899b48427/manager/0.log" Jan 23 10:43:54 crc kubenswrapper[4684]: I0123 10:43:54.439257 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-t4lh8_56e669a2-5990-45ad-8d32-e8d57ef7a81e/manager/0.log" Jan 23 10:43:54 crc kubenswrapper[4684]: I0123 10:43:54.443761 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-lfjfh_67b55215-9df7-4273-8e15-27c0a969e065/manager/0.log" Jan 23 10:43:54 crc kubenswrapper[4684]: I0123 10:43:54.775831 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-pl7fj_e1b45f19-8737-4f21-aade-d2b9cfda08fe/manager/0.log" Jan 23 10:43:54 crc kubenswrapper[4684]: 
I0123 10:43:54.783160 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-skhwl_e13327b0-3e7d-498b-a5cb-1ae9cbc6fad7/manager/0.log" Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.028633 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-7nv72_9e4ad169-96f1-40ef-bedf-75d3a233ca35/manager/0.log" Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.239785 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-b82vt_2466d64b-62c9-422f-9609-5aaaa7de084c/manager/0.log" Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.381760 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557lb7bq_b0bb140c-ce3d-4d8b-8627-67ae0145b2d4/manager/0.log" Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.388197 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-jnlvz_b1376fdd-31b4-4a7a-a9b6-1a38565083cb/manager/0.log" Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.579646 4684 generic.go:334] "Generic (PLEG): container finished" podID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerID="20c2f9f1a226fa7267f6b86c95ee3ef58c96a602105668cffbd36bdda353c745" exitCode=0 Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.579857 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerDied","Data":"20c2f9f1a226fa7267f6b86c95ee3ef58c96a602105668cffbd36bdda353c745"} Jan 23 10:43:55 crc kubenswrapper[4684]: I0123 10:43:55.814053 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-85bfd44c94-6dlkw_652bdac8-6488-4303-9d64-809a46258816/operator/0.log" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.175101 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zt5gw_2c935634-e963-49ad-868b-7576011f21fb/registry-server/0.log" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.255002 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84lbj"] Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.257954 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.273443 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84lbj"] Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.402366 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2sz\" (UniqueName: \"kubernetes.io/projected/965d367f-47b3-4ea0-8157-9549f30c91bd-kube-api-access-2h2sz\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.402500 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-catalog-content\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.402527 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-utilities\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.506670 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-catalog-content\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.506782 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-utilities\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.506918 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2sz\" (UniqueName: \"kubernetes.io/projected/965d367f-47b3-4ea0-8157-9549f30c91bd-kube-api-access-2h2sz\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.507834 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-catalog-content\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.508101 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-utilities\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.535012 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2h2sz\" (UniqueName: \"kubernetes.io/projected/965d367f-47b3-4ea0-8157-9549f30c91bd-kube-api-access-2h2sz\") pod \"community-operators-84lbj\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") " pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.574692 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ll27v_0755ab86-427c-4e7b-8712-4db92f543c69/manager/0.log" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.601032 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerStarted","Data":"54e2e506206e9d628fdbe5a527babc1b518a1c6c1cad875f0df4f96d70abd5d9"} Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.619593 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:43:56 crc kubenswrapper[4684]: I0123 10:43:56.640922 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kttjd" podStartSLOduration=3.082038983 podStartE2EDuration="7.640905872s" podCreationTimestamp="2026-01-23 10:43:49 +0000 UTC" firstStartedPulling="2026-01-23 10:43:51.518505971 +0000 UTC m=+5804.141884512" lastFinishedPulling="2026-01-23 10:43:56.07737286 +0000 UTC m=+5808.700751401" observedRunningTime="2026-01-23 10:43:56.625597913 +0000 UTC m=+5809.248976474" watchObservedRunningTime="2026-01-23 10:43:56.640905872 +0000 UTC m=+5809.264284413" Jan 23 10:43:57 crc kubenswrapper[4684]: I0123 10:43:57.210638 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-dbggg_ba45281f-6224-4ce8-bc8e-df42f7e89340/manager/0.log" Jan 23 10:43:57 crc kubenswrapper[4684]: I0123 10:43:57.515871 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-8cnrp_ca0f93c0-4138-44c8-bd7d-027ced364a97/manager/0.log" Jan 23 10:43:57 crc kubenswrapper[4684]: I0123 10:43:57.536516 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84lbj"] Jan 23 10:43:57 crc kubenswrapper[4684]: W0123 10:43:57.543997 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod965d367f_47b3_4ea0_8157_9549f30c91bd.slice/crio-b4af531dbd8e31ca12eb908af46801d390ed6879789592feb960ab929d50b8de WatchSource:0}: Error finding container b4af531dbd8e31ca12eb908af46801d390ed6879789592feb960ab929d50b8de: Status 404 returned error can't find the container with id b4af531dbd8e31ca12eb908af46801d390ed6879789592feb960ab929d50b8de Jan 23 10:43:57 crc kubenswrapper[4684]: I0123 10:43:57.619920 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerStarted","Data":"b4af531dbd8e31ca12eb908af46801d390ed6879789592feb960ab929d50b8de"} Jan 23 10:43:57 crc kubenswrapper[4684]: I0123 10:43:57.759794 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-c6nkk_b45428ef-0f84-4d58-ab99-9d7e26470caa/operator/0.log" Jan 23 10:43:58 crc kubenswrapper[4684]: I0123 10:43:58.030321 
4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2f7kg_b3f2f6c1-234f-457b-b335-f7e732976b73/manager/0.log" Jan 23 10:43:58 crc kubenswrapper[4684]: I0123 10:43:58.060626 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-sx2td_afb73601-eb5b-44cd-9f30-4e38a4cc28be/manager/0.log" Jan 23 10:43:58 crc kubenswrapper[4684]: I0123 10:43:58.064436 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4rk7k_829a9115-60b9-4f34-811a-1acc4cbd9897/manager/0.log" Jan 23 10:43:58 crc kubenswrapper[4684]: I0123 10:43:58.918125 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57c46955cf-s5vdl_ef474359-484b-4042-8d86-0aa2fce7a260/manager/0.log" Jan 23 10:43:59 crc kubenswrapper[4684]: I0123 10:43:59.640147 4684 generic.go:334] "Generic (PLEG): container finished" podID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerID="ed38b2207702717e5d30d4d73bd315075a55d7bb7a99c6177635c481dd764bab" exitCode=0 Jan 23 10:43:59 crc kubenswrapper[4684]: I0123 10:43:59.640191 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerDied","Data":"ed38b2207702717e5d30d4d73bd315075a55d7bb7a99c6177635c481dd764bab"} Jan 23 10:44:00 crc kubenswrapper[4684]: I0123 10:44:00.245686 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:44:00 crc kubenswrapper[4684]: I0123 10:44:00.245735 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kttjd" Jan 23 10:44:00 crc kubenswrapper[4684]: I0123 10:44:00.654202 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerStarted","Data":"7d52d49ceb023b8ab18f5b266041fee0b4fd3d0acf71d9d8b0ff0ab491db942d"} Jan 23 10:44:01 crc kubenswrapper[4684]: I0123 10:44:01.303588 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kttjd" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="registry-server" probeResult="failure" output=< Jan 23 10:44:01 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:44:01 crc kubenswrapper[4684]: > Jan 23 10:44:01 crc kubenswrapper[4684]: I0123 10:44:01.672152 4684 generic.go:334] "Generic (PLEG): container finished" podID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerID="7d52d49ceb023b8ab18f5b266041fee0b4fd3d0acf71d9d8b0ff0ab491db942d" exitCode=0 Jan 23 10:44:01 crc kubenswrapper[4684]: I0123 10:44:01.672500 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerDied","Data":"7d52d49ceb023b8ab18f5b266041fee0b4fd3d0acf71d9d8b0ff0ab491db942d"} Jan 23 10:44:02 crc kubenswrapper[4684]: I0123 10:44:02.684519 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerStarted","Data":"8a5eabfa10c338551f44f63c6bc66e5f8c4191d3a2d588945da9f866bc9fc529"} Jan 23 10:44:02 
crc kubenswrapper[4684]: I0123 10:44:02.707814 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84lbj" podStartSLOduration=4.141582223 podStartE2EDuration="6.707795189s" podCreationTimestamp="2026-01-23 10:43:56 +0000 UTC" firstStartedPulling="2026-01-23 10:43:59.642871412 +0000 UTC m=+5812.266249963" lastFinishedPulling="2026-01-23 10:44:02.209084388 +0000 UTC m=+5814.832462929" observedRunningTime="2026-01-23 10:44:02.70155722 +0000 UTC m=+5815.324935761" watchObservedRunningTime="2026-01-23 10:44:02.707795189 +0000 UTC m=+5815.331173730" Jan 23 10:44:06 crc kubenswrapper[4684]: I0123 10:44:06.620820 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:44:06 crc kubenswrapper[4684]: I0123 10:44:06.621459 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:44:06 crc kubenswrapper[4684]: I0123 10:44:06.671658 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:44:11 crc kubenswrapper[4684]: I0123 10:44:11.301872 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kttjd" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="registry-server" probeResult="failure" output=< Jan 23 10:44:11 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 23 10:44:11 crc kubenswrapper[4684]: > Jan 23 10:44:13 crc kubenswrapper[4684]: I0123 10:44:13.728810 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:44:13 crc kubenswrapper[4684]: I0123 10:44:13.729824 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:44:13 crc kubenswrapper[4684]: I0123 10:44:13.729903 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 10:44:13 crc kubenswrapper[4684]: I0123 10:44:13.731431 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4341f39926607ae03c3e178cd27115ca38cc60da7be79d26e9660a1c7ba8da6"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:44:13 crc kubenswrapper[4684]: I0123 10:44:13.731520 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://f4341f39926607ae03c3e178cd27115ca38cc60da7be79d26e9660a1c7ba8da6" gracePeriod=600 Jan 23 10:44:14 crc kubenswrapper[4684]: I0123 10:44:14.784973 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" 
containerID="f4341f39926607ae03c3e178cd27115ca38cc60da7be79d26e9660a1c7ba8da6" exitCode=0 Jan 23 10:44:14 crc kubenswrapper[4684]: I0123 10:44:14.785058 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"f4341f39926607ae03c3e178cd27115ca38cc60da7be79d26e9660a1c7ba8da6"} Jan 23 10:44:14 crc kubenswrapper[4684]: I0123 10:44:14.785437 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerStarted","Data":"4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b"} Jan 23 10:44:14 crc kubenswrapper[4684]: I0123 10:44:14.785462 4684 scope.go:117] "RemoveContainer" containerID="c351f8f481b25c1f4451b34197b1573cb0b3fb64f7de44a6587d8bc17e89bbbf" Jan 23 10:44:16 crc kubenswrapper[4684]: I0123 10:44:16.679803 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84lbj" Jan 23 10:44:16 crc kubenswrapper[4684]: I0123 10:44:16.729435 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84lbj"] Jan 23 10:44:16 crc kubenswrapper[4684]: I0123 10:44:16.806118 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84lbj" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="registry-server" containerID="cri-o://8a5eabfa10c338551f44f63c6bc66e5f8c4191d3a2d588945da9f866bc9fc529" gracePeriod=2 Jan 23 10:44:17 crc kubenswrapper[4684]: I0123 10:44:17.840903 4684 generic.go:334] "Generic (PLEG): container finished" podID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerID="8a5eabfa10c338551f44f63c6bc66e5f8c4191d3a2d588945da9f866bc9fc529" exitCode=0 Jan 23 10:44:17 crc kubenswrapper[4684]: I0123 10:44:17.841151 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerDied","Data":"8a5eabfa10c338551f44f63c6bc66e5f8c4191d3a2d588945da9f866bc9fc529"} Jan 23 10:44:17 crc kubenswrapper[4684]: I0123 10:44:17.935167 4684 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:17.997198 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-utilities\") pod \"965d367f-47b3-4ea0-8157-9549f30c91bd\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") "
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:17.997421 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h2sz\" (UniqueName: \"kubernetes.io/projected/965d367f-47b3-4ea0-8157-9549f30c91bd-kube-api-access-2h2sz\") pod \"965d367f-47b3-4ea0-8157-9549f30c91bd\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") "
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:17.997449 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-catalog-content\") pod \"965d367f-47b3-4ea0-8157-9549f30c91bd\" (UID: \"965d367f-47b3-4ea0-8157-9549f30c91bd\") "
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:17.998198 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-utilities" (OuterVolumeSpecName: "utilities") pod "965d367f-47b3-4ea0-8157-9549f30c91bd" (UID: "965d367f-47b3-4ea0-8157-9549f30c91bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:17.998514 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.021377 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965d367f-47b3-4ea0-8157-9549f30c91bd-kube-api-access-2h2sz" (OuterVolumeSpecName: "kube-api-access-2h2sz") pod "965d367f-47b3-4ea0-8157-9549f30c91bd" (UID: "965d367f-47b3-4ea0-8157-9549f30c91bd"). InnerVolumeSpecName "kube-api-access-2h2sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.057113 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "965d367f-47b3-4ea0-8157-9549f30c91bd" (UID: "965d367f-47b3-4ea0-8157-9549f30c91bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.100462 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h2sz\" (UniqueName: \"kubernetes.io/projected/965d367f-47b3-4ea0-8157-9549f30c91bd-kube-api-access-2h2sz\") on node \"crc\" DevicePath \"\""
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.100497 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/965d367f-47b3-4ea0-8157-9549f30c91bd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.852120 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84lbj" event={"ID":"965d367f-47b3-4ea0-8157-9549f30c91bd","Type":"ContainerDied","Data":"b4af531dbd8e31ca12eb908af46801d390ed6879789592feb960ab929d50b8de"}
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.852411 4684 scope.go:117] "RemoveContainer" containerID="8a5eabfa10c338551f44f63c6bc66e5f8c4191d3a2d588945da9f866bc9fc529"
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.852225 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84lbj"
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.869805 4684 scope.go:117] "RemoveContainer" containerID="7d52d49ceb023b8ab18f5b266041fee0b4fd3d0acf71d9d8b0ff0ab491db942d"
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.894191 4684 scope.go:117] "RemoveContainer" containerID="ed38b2207702717e5d30d4d73bd315075a55d7bb7a99c6177635c481dd764bab"
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.910748 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84lbj"]
Jan 23 10:44:18 crc kubenswrapper[4684]: I0123 10:44:18.923245 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84lbj"]
Jan 23 10:44:19 crc kubenswrapper[4684]: I0123 10:44:19.597989 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" path="/var/lib/kubelet/pods/965d367f-47b3-4ea0-8157-9549f30c91bd/volumes"
Jan 23 10:44:20 crc kubenswrapper[4684]: I0123 10:44:20.307807 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kttjd"
Jan 23 10:44:20 crc kubenswrapper[4684]: I0123 10:44:20.355597 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kttjd"
Jan 23 10:44:21 crc kubenswrapper[4684]: I0123 10:44:21.314068 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kttjd"]
Jan 23 10:44:21 crc kubenswrapper[4684]: I0123 10:44:21.875562 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kttjd" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="registry-server" containerID="cri-o://54e2e506206e9d628fdbe5a527babc1b518a1c6c1cad875f0df4f96d70abd5d9" gracePeriod=2
Jan 23 10:44:21 crc kubenswrapper[4684]: I0123 10:44:21.967422 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4qpn2_f92af7c0-b6ef-4fe1-b057-b2424aa96458/control-plane-machine-set-operator/0.log"
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.165881 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pgngb_9b3c5fb5-4205-4162-9d9e-b522ee092236/kube-rbac-proxy/0.log"
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.184273 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pgngb_9b3c5fb5-4205-4162-9d9e-b522ee092236/machine-api-operator/0.log"
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.902227 4684 generic.go:334] "Generic (PLEG): container finished" podID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerID="54e2e506206e9d628fdbe5a527babc1b518a1c6c1cad875f0df4f96d70abd5d9" exitCode=0
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.902267 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerDied","Data":"54e2e506206e9d628fdbe5a527babc1b518a1c6c1cad875f0df4f96d70abd5d9"}
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.902522 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kttjd" event={"ID":"6fdb8806-11dc-4679-b511-d9563c55b72c","Type":"ContainerDied","Data":"a750b7e004c1e0bf34c0795ee44f710b02c408695fd34d02c5735ee31c79406a"}
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.902538 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a750b7e004c1e0bf34c0795ee44f710b02c408695fd34d02c5735ee31c79406a"
Jan 23 10:44:22 crc kubenswrapper[4684]: I0123 10:44:22.949903 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kttjd"
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.021190 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqxt\" (UniqueName: \"kubernetes.io/projected/6fdb8806-11dc-4679-b511-d9563c55b72c-kube-api-access-bbqxt\") pod \"6fdb8806-11dc-4679-b511-d9563c55b72c\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") "
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.021305 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-utilities\") pod \"6fdb8806-11dc-4679-b511-d9563c55b72c\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") "
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.021331 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-catalog-content\") pod \"6fdb8806-11dc-4679-b511-d9563c55b72c\" (UID: \"6fdb8806-11dc-4679-b511-d9563c55b72c\") "
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.023594 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-utilities" (OuterVolumeSpecName: "utilities") pod "6fdb8806-11dc-4679-b511-d9563c55b72c" (UID: "6fdb8806-11dc-4679-b511-d9563c55b72c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.029929 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdb8806-11dc-4679-b511-d9563c55b72c-kube-api-access-bbqxt" (OuterVolumeSpecName: "kube-api-access-bbqxt") pod "6fdb8806-11dc-4679-b511-d9563c55b72c" (UID: "6fdb8806-11dc-4679-b511-d9563c55b72c"). InnerVolumeSpecName "kube-api-access-bbqxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.125048 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqxt\" (UniqueName: \"kubernetes.io/projected/6fdb8806-11dc-4679-b511-d9563c55b72c-kube-api-access-bbqxt\") on node \"crc\" DevicePath \"\""
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.125086 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.185420 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fdb8806-11dc-4679-b511-d9563c55b72c" (UID: "6fdb8806-11dc-4679-b511-d9563c55b72c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.227588 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdb8806-11dc-4679-b511-d9563c55b72c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.909453 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kttjd"
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.940769 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kttjd"]
Jan 23 10:44:23 crc kubenswrapper[4684]: I0123 10:44:23.954089 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kttjd"]
Jan 23 10:44:25 crc kubenswrapper[4684]: I0123 10:44:25.591957 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" path="/var/lib/kubelet/pods/6fdb8806-11dc-4679-b511-d9563c55b72c/volumes"
Jan 23 10:44:36 crc kubenswrapper[4684]: I0123 10:44:36.041170 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-9kbld_05d3b6d9-c965-441d-a575-dd4d250c519b/cert-manager-controller/0.log"
Jan 23 10:44:36 crc kubenswrapper[4684]: I0123 10:44:36.229073 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8p4gl_f4c0acc8-e95c-4880-ad7b-eafc6422a713/cert-manager-cainjector/0.log"
Jan 23 10:44:36 crc kubenswrapper[4684]: I0123 10:44:36.337188 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sfbw8_b61e14d8-17ad-4f3b-aa18-e0030a15c870/cert-manager-webhook/0.log"
Jan 23 10:44:49 crc kubenswrapper[4684]: I0123 10:44:49.471054 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-l7dkm_bedfa793-7aff-4710-ae19-260a52e2957f/nmstate-console-plugin/0.log"
Jan 23 10:44:49 crc kubenswrapper[4684]: I0123 10:44:49.761772 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2kxj8_2125ebe0-da30-4e7c-93e0-66b7aa2b87e4/nmstate-handler/0.log"
Jan 23 10:44:49 crc kubenswrapper[4684]: I0123 10:44:49.869617 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dlvm4_55e58493-0888-4e94-bf0f-6c5b99a10ac4/kube-rbac-proxy/0.log"
Jan 23 10:44:49 crc kubenswrapper[4684]: I0123 10:44:49.980786 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dlvm4_55e58493-0888-4e94-bf0f-6c5b99a10ac4/nmstate-metrics/0.log"
Jan 23 10:44:50 crc kubenswrapper[4684]: I0123 10:44:50.134568 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-qrbb4_4e70b1ea-5bbb-44b8-893b-0b08388d8a39/nmstate-operator/0.log"
Jan 23 10:44:50 crc kubenswrapper[4684]: I0123 10:44:50.196034 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-p4bsj_7f98efc7-bdf6-4943-8ef9-9056f713acb2/nmstate-webhook/0.log"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.151836 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"]
Jan 23 10:45:00 crc kubenswrapper[4684]: E0123 10:45:00.152956 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="registry-server"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.152973 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="registry-server"
Jan 23 10:45:00 crc kubenswrapper[4684]: E0123 10:45:00.152997 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="extract-content"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153004 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="extract-content"
Jan 23 10:45:00 crc kubenswrapper[4684]: E0123 10:45:00.153018 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="extract-utilities"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153024 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="extract-utilities"
Jan 23 10:45:00 crc kubenswrapper[4684]: E0123 10:45:00.153041 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="extract-utilities"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153046 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="extract-utilities"
Jan 23 10:45:00 crc kubenswrapper[4684]: E0123 10:45:00.153059 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="extract-content"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153064 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="extract-content"
Jan 23 10:45:00 crc kubenswrapper[4684]: E0123 10:45:00.153074 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="registry-server"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153080 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="registry-server"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153278 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdb8806-11dc-4679-b511-d9563c55b72c" containerName="registry-server"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.153301 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="965d367f-47b3-4ea0-8157-9549f30c91bd" containerName="registry-server"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.154084 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.197348 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.201216 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.208383 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"]
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.296953 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e68ea61-4eea-4774-b113-02f3df467b93-config-volume\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.297882 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e68ea61-4eea-4774-b113-02f3df467b93-secret-volume\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.297930 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vnz9\" (UniqueName: \"kubernetes.io/projected/5e68ea61-4eea-4774-b113-02f3df467b93-kube-api-access-8vnz9\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.399895 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e68ea61-4eea-4774-b113-02f3df467b93-config-volume\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.400061 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e68ea61-4eea-4774-b113-02f3df467b93-secret-volume\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.400093 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vnz9\" (UniqueName: \"kubernetes.io/projected/5e68ea61-4eea-4774-b113-02f3df467b93-kube-api-access-8vnz9\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.401266 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e68ea61-4eea-4774-b113-02f3df467b93-config-volume\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.415496 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e68ea61-4eea-4774-b113-02f3df467b93-secret-volume\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.420569 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vnz9\" (UniqueName: \"kubernetes.io/projected/5e68ea61-4eea-4774-b113-02f3df467b93-kube-api-access-8vnz9\") pod \"collect-profiles-29486085-lgdfp\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:00 crc kubenswrapper[4684]: I0123 10:45:00.520551 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:01 crc kubenswrapper[4684]: I0123 10:45:01.015204 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"]
Jan 23 10:45:01 crc kubenswrapper[4684]: I0123 10:45:01.303548 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp" event={"ID":"5e68ea61-4eea-4774-b113-02f3df467b93","Type":"ContainerStarted","Data":"f828ad0e2df049f6d9e24c5479555a6f22c5977461d47fd46192ba2e4111a7a0"}
Jan 23 10:45:02 crc kubenswrapper[4684]: I0123 10:45:02.313856 4684 generic.go:334] "Generic (PLEG): container finished" podID="5e68ea61-4eea-4774-b113-02f3df467b93" containerID="2ab1a1199200e5a35592c267b46c350d2275eb1e7bccc564620ba8f6933ed37d" exitCode=0
Jan 23 10:45:02 crc kubenswrapper[4684]: I0123 10:45:02.313899 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp" event={"ID":"5e68ea61-4eea-4774-b113-02f3df467b93","Type":"ContainerDied","Data":"2ab1a1199200e5a35592c267b46c350d2275eb1e7bccc564620ba8f6933ed37d"}
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.735944 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.876634 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e68ea61-4eea-4774-b113-02f3df467b93-secret-volume\") pod \"5e68ea61-4eea-4774-b113-02f3df467b93\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") "
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.876880 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vnz9\" (UniqueName: \"kubernetes.io/projected/5e68ea61-4eea-4774-b113-02f3df467b93-kube-api-access-8vnz9\") pod \"5e68ea61-4eea-4774-b113-02f3df467b93\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") "
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.877046 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e68ea61-4eea-4774-b113-02f3df467b93-config-volume\") pod \"5e68ea61-4eea-4774-b113-02f3df467b93\" (UID: \"5e68ea61-4eea-4774-b113-02f3df467b93\") "
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.877491 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e68ea61-4eea-4774-b113-02f3df467b93-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e68ea61-4eea-4774-b113-02f3df467b93" (UID: "5e68ea61-4eea-4774-b113-02f3df467b93"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.877673 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e68ea61-4eea-4774-b113-02f3df467b93-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.886851 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e68ea61-4eea-4774-b113-02f3df467b93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5e68ea61-4eea-4774-b113-02f3df467b93" (UID: "5e68ea61-4eea-4774-b113-02f3df467b93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.897552 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e68ea61-4eea-4774-b113-02f3df467b93-kube-api-access-8vnz9" (OuterVolumeSpecName: "kube-api-access-8vnz9") pod "5e68ea61-4eea-4774-b113-02f3df467b93" (UID: "5e68ea61-4eea-4774-b113-02f3df467b93"). InnerVolumeSpecName "kube-api-access-8vnz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.979678 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e68ea61-4eea-4774-b113-02f3df467b93-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 10:45:03 crc kubenswrapper[4684]: I0123 10:45:03.979732 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vnz9\" (UniqueName: \"kubernetes.io/projected/5e68ea61-4eea-4774-b113-02f3df467b93-kube-api-access-8vnz9\") on node \"crc\" DevicePath \"\""
Jan 23 10:45:04 crc kubenswrapper[4684]: I0123 10:45:04.332348 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp" event={"ID":"5e68ea61-4eea-4774-b113-02f3df467b93","Type":"ContainerDied","Data":"f828ad0e2df049f6d9e24c5479555a6f22c5977461d47fd46192ba2e4111a7a0"}
Jan 23 10:45:04 crc kubenswrapper[4684]: I0123 10:45:04.332394 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f828ad0e2df049f6d9e24c5479555a6f22c5977461d47fd46192ba2e4111a7a0"
Jan 23 10:45:04 crc kubenswrapper[4684]: I0123 10:45:04.332422 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486085-lgdfp"
Jan 23 10:45:04 crc kubenswrapper[4684]: I0123 10:45:04.820460 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd"]
Jan 23 10:45:04 crc kubenswrapper[4684]: I0123 10:45:04.833237 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-sg9qd"]
Jan 23 10:45:05 crc kubenswrapper[4684]: I0123 10:45:05.593619 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff06a00d-310c-41dc-bae5-042190b4be89" path="/var/lib/kubelet/pods/ff06a00d-310c-41dc-bae5-042190b4be89/volumes"
Jan 23 10:45:18 crc kubenswrapper[4684]: I0123 10:45:18.381931 4684 scope.go:117] "RemoveContainer" containerID="17eaf213586c18cd0815c547fe7ac44336e510e6db9d3bfc57b801f2786cc066"
Jan 23 10:45:18 crc kubenswrapper[4684]: I0123 10:45:18.462456 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8v8jk_b6455af6-22c5-44ad-a1fb-7d50f4a5271d/controller/0.log"
Jan 23 10:45:18 crc kubenswrapper[4684]: I0123 10:45:18.471446 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8v8jk_b6455af6-22c5-44ad-a1fb-7d50f4a5271d/kube-rbac-proxy/0.log"
Jan 23 10:45:18 crc kubenswrapper[4684]: I0123 10:45:18.715974 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qp4nh_ae885236-c9d2-4c57-bc11-a9aa077f5d1b/frr-k8s-webhook-server/0.log"
Jan 23 10:45:18 crc kubenswrapper[4684]: I0123 10:45:18.915583 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.112042 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.121798 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.164108 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.185439 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.381158 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.404625 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.448172 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.475948 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.670951 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-metrics/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.673124 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-frr-files/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.709815 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/cp-reloader/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.771938 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/controller/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.897650 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/frr-metrics/0.log"
Jan 23 10:45:19 crc kubenswrapper[4684]: I0123 10:45:19.951742 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/kube-rbac-proxy/0.log"
Jan 23 10:45:20 crc kubenswrapper[4684]: I0123 10:45:20.090610 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/kube-rbac-proxy-frr/0.log"
Jan 23 10:45:20 crc kubenswrapper[4684]: I0123 10:45:20.275469 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/reloader/0.log"
Jan 23 10:45:20 crc kubenswrapper[4684]: I0123 10:45:20.397039 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66c47b49dd-q49fh_00c9dbc4-3023-4be1-9876-0e2e2b35ac82/manager/0.log"
Jan 23 10:45:20 crc kubenswrapper[4684]: I0123 10:45:20.681610 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bfcb9dfcc-7qsz8_c001f52e-014a-4250-af27-7fdcebc0c759/webhook-server/0.log"
Jan 23 10:45:20 crc kubenswrapper[4684]: I0123 10:45:20.818396 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v69pl_c673aad0-48c8-4410-9d62-028ebc02c103/kube-rbac-proxy/0.log"
Jan 23 10:45:21 crc kubenswrapper[4684]: I0123 10:45:21.468905 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v69pl_c673aad0-48c8-4410-9d62-028ebc02c103/speaker/0.log"
Jan 23 10:45:21 crc kubenswrapper[4684]: I0123 10:45:21.532829 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr9tk_9171f98d-dc3e-4258-9c6e-a8316190944d/frr/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.124199 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/util/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.504956 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/util/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.548209 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/pull/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.554799 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/pull/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.756399 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/util/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.763688 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/extract/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.782556 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7ml56_dea3f1d3-f2aa-41e3-afb0-ce7658aae496/pull/0.log"
Jan 23 10:45:35 crc kubenswrapper[4684]: I0123 10:45:35.974369 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/util/0.log"
Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.136821 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/util/0.log"
Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.205169 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log"
Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.218364 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log"
Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.375042 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log"
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/pull/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.375749 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/util/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.414750 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gnsqg_169c6832-37df-469f-9ff3-c0775456568a/extract/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.550708 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-utilities/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.757252 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-utilities/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.791503 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-content/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.836371 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-content/0.log" Jan 23 10:45:36 crc kubenswrapper[4684]: I0123 10:45:36.985936 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-utilities/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.000175 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/extract-content/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.293468 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-utilities/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.541270 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-utilities/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.567146 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-content/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.707666 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-content/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.804136 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcntf_005d929c-6b2b-4644-bddb-c02aa19facfe/registry-server/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: I0123 10:45:37.913044 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-content/0.log" Jan 23 10:45:37 crc kubenswrapper[4684]: 
I0123 10:45:37.918825 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/extract-utilities/0.log" Jan 23 10:45:38 crc kubenswrapper[4684]: I0123 10:45:38.268979 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-25vv4_9703bbe4-b658-40eb-b8db-14f18c684ab3/marketplace-operator/0.log" Jan 23 10:45:38 crc kubenswrapper[4684]: I0123 10:45:38.526253 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-utilities/0.log" Jan 23 10:45:38 crc kubenswrapper[4684]: I0123 10:45:38.817430 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6dpg4_fdf3fd39-d429-4b70-805a-095ada6f811a/registry-server/0.log" Jan 23 10:45:38 crc kubenswrapper[4684]: I0123 10:45:38.874390 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-utilities/0.log" Jan 23 10:45:38 crc kubenswrapper[4684]: I0123 10:45:38.905440 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-content/0.log" Jan 23 10:45:38 crc kubenswrapper[4684]: I0123 10:45:38.958361 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-content/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.115322 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-utilities/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.119116 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/extract-content/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.367631 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b27ph_a9a4439f-bc6b-4367-be86-8aa563f0b50e/registry-server/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.426775 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-utilities/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.690441 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-content/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.690450 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-content/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.768951 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-utilities/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.925628 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-content/0.log" Jan 23 10:45:39 crc kubenswrapper[4684]: I0123 10:45:39.956680 
4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/extract-utilities/0.log" Jan 23 10:45:40 crc kubenswrapper[4684]: I0123 10:45:40.631322 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d7mvn_2f0cf87d-0316-45f3-97f8-2808b497892f/registry-server/0.log" Jan 23 10:46:43 crc kubenswrapper[4684]: I0123 10:46:43.728980 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:46:43 crc kubenswrapper[4684]: I0123 10:46:43.729801 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:47:13 crc kubenswrapper[4684]: I0123 10:47:13.728559 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:47:13 crc kubenswrapper[4684]: I0123 10:47:13.729258 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:47:18 crc kubenswrapper[4684]: I0123 10:47:18.522181 4684 scope.go:117] "RemoveContainer" containerID="1d04f21db2dd498eff68a925b9628260494183e78ffde154c498b63e7a16ecc6" Jan 23 10:47:43 crc kubenswrapper[4684]: I0123 10:47:43.728686 4684 patch_prober.go:28] interesting pod/machine-config-daemon-wtphf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:47:43 crc kubenswrapper[4684]: I0123 10:47:43.729218 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:47:43 crc kubenswrapper[4684]: I0123 10:47:43.729264 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" Jan 23 10:47:43 crc kubenswrapper[4684]: I0123 10:47:43.730026 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b"} pod="openshift-machine-config-operator/machine-config-daemon-wtphf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:47:43 crc kubenswrapper[4684]: I0123 10:47:43.730084 4684 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" containerName="machine-config-daemon" containerID="cri-o://4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" gracePeriod=600 Jan 23 10:47:43 crc kubenswrapper[4684]: E0123 10:47:43.885076 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:47:44 crc kubenswrapper[4684]: I0123 10:47:44.844040 4684 generic.go:334] "Generic (PLEG): container finished" podID="fe8e0d00-860e-4d47-9f48-686555520d79" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" exitCode=0 Jan 23 10:47:44 crc kubenswrapper[4684]: I0123 10:47:44.844099 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" event={"ID":"fe8e0d00-860e-4d47-9f48-686555520d79","Type":"ContainerDied","Data":"4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b"} Jan 23 10:47:44 crc kubenswrapper[4684]: I0123 10:47:44.844140 4684 scope.go:117] "RemoveContainer" containerID="f4341f39926607ae03c3e178cd27115ca38cc60da7be79d26e9660a1c7ba8da6" Jan 23 10:47:44 crc kubenswrapper[4684]: I0123 10:47:44.845153 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:47:44 crc kubenswrapper[4684]: E0123 10:47:44.845630 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:47:56 crc kubenswrapper[4684]: I0123 10:47:56.582108 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:47:56 crc kubenswrapper[4684]: E0123 10:47:56.582905 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:48:08 crc kubenswrapper[4684]: I0123 10:48:08.581788 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:48:08 crc kubenswrapper[4684]: E0123 10:48:08.582603 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:48:11 crc kubenswrapper[4684]: I0123 10:48:11.053735 4684 generic.go:334] "Generic (PLEG): container finished" podID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerID="830c5570617066f5c6adc549c2ca057cd3ef40a0c4b0f157845bfd8dd5a219e4" exitCode=0 Jan 23 10:48:11 crc kubenswrapper[4684]: I0123 10:48:11.053812 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-flg8f/must-gather-qb77x" event={"ID":"c6b31bbd-d573-438a-a8d1-7a2376673a73","Type":"ContainerDied","Data":"830c5570617066f5c6adc549c2ca057cd3ef40a0c4b0f157845bfd8dd5a219e4"} Jan 23 10:48:11 crc kubenswrapper[4684]: I0123 10:48:11.055095 4684 scope.go:117] "RemoveContainer" containerID="830c5570617066f5c6adc549c2ca057cd3ef40a0c4b0f157845bfd8dd5a219e4" Jan 23 10:48:11 crc kubenswrapper[4684]: I0123 10:48:11.722837 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-flg8f_must-gather-qb77x_c6b31bbd-d573-438a-a8d1-7a2376673a73/gather/0.log" Jan 23 10:48:18 crc kubenswrapper[4684]: I0123 10:48:18.658124 4684 scope.go:117] "RemoveContainer" containerID="a03c8adaedf0cb6c22f4192e534d8fd43977dba97cae4f141c4bd92dfb4c812a" Jan 23 10:48:21 crc kubenswrapper[4684]: I0123 10:48:21.584356 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:48:21 crc kubenswrapper[4684]: E0123 10:48:21.586051 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:48:26 crc kubenswrapper[4684]: I0123 10:48:26.911927 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-flg8f/must-gather-qb77x"] Jan 23 10:48:26 crc kubenswrapper[4684]: I0123 10:48:26.912617 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-flg8f/must-gather-qb77x" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerName="copy" containerID="cri-o://369cffed874857fdd21e4842ad5d9e5fc4a4e19647922e74c7babcf7fbd2d84b" gracePeriod=2 Jan 23 10:48:26 crc kubenswrapper[4684]: I0123 10:48:26.922497 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-flg8f/must-gather-qb77x"] Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.221878 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-flg8f_must-gather-qb77x_c6b31bbd-d573-438a-a8d1-7a2376673a73/copy/0.log" Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.222470 4684 generic.go:334] "Generic (PLEG): container finished" podID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerID="369cffed874857fdd21e4842ad5d9e5fc4a4e19647922e74c7babcf7fbd2d84b" exitCode=143 Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.401920 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-flg8f_must-gather-qb77x_c6b31bbd-d573-438a-a8d1-7a2376673a73/copy/0.log" Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.402332 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.550445 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thmkl\" (UniqueName: \"kubernetes.io/projected/c6b31bbd-d573-438a-a8d1-7a2376673a73-kube-api-access-thmkl\") pod \"c6b31bbd-d573-438a-a8d1-7a2376673a73\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.550517 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6b31bbd-d573-438a-a8d1-7a2376673a73-must-gather-output\") pod \"c6b31bbd-d573-438a-a8d1-7a2376673a73\" (UID: \"c6b31bbd-d573-438a-a8d1-7a2376673a73\") " Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.566002 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b31bbd-d573-438a-a8d1-7a2376673a73-kube-api-access-thmkl" (OuterVolumeSpecName: "kube-api-access-thmkl") pod "c6b31bbd-d573-438a-a8d1-7a2376673a73" (UID: "c6b31bbd-d573-438a-a8d1-7a2376673a73"). InnerVolumeSpecName "kube-api-access-thmkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.654837 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thmkl\" (UniqueName: \"kubernetes.io/projected/c6b31bbd-d573-438a-a8d1-7a2376673a73-kube-api-access-thmkl\") on node \"crc\" DevicePath \"\"" Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.731676 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6b31bbd-d573-438a-a8d1-7a2376673a73-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c6b31bbd-d573-438a-a8d1-7a2376673a73" (UID: "c6b31bbd-d573-438a-a8d1-7a2376673a73"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:48:27 crc kubenswrapper[4684]: I0123 10:48:27.766116 4684 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c6b31bbd-d573-438a-a8d1-7a2376673a73-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 10:48:28 crc kubenswrapper[4684]: I0123 10:48:28.232675 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-flg8f_must-gather-qb77x_c6b31bbd-d573-438a-a8d1-7a2376673a73/copy/0.log" Jan 23 10:48:28 crc kubenswrapper[4684]: I0123 10:48:28.233357 4684 scope.go:117] "RemoveContainer" containerID="369cffed874857fdd21e4842ad5d9e5fc4a4e19647922e74c7babcf7fbd2d84b" Jan 23 10:48:28 crc kubenswrapper[4684]: I0123 10:48:28.233527 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-flg8f/must-gather-qb77x" Jan 23 10:48:28 crc kubenswrapper[4684]: I0123 10:48:28.261777 4684 scope.go:117] "RemoveContainer" containerID="830c5570617066f5c6adc549c2ca057cd3ef40a0c4b0f157845bfd8dd5a219e4" Jan 23 10:48:29 crc kubenswrapper[4684]: I0123 10:48:29.592018 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" path="/var/lib/kubelet/pods/c6b31bbd-d573-438a-a8d1-7a2376673a73/volumes" Jan 23 10:48:34 crc kubenswrapper[4684]: I0123 10:48:34.581969 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:48:34 crc kubenswrapper[4684]: E0123 10:48:34.582648 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:48:45 crc kubenswrapper[4684]: I0123 10:48:45.582656 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:48:45 crc kubenswrapper[4684]: E0123 10:48:45.583539 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.189818 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tc9c5"] Jan 23 10:48:54 crc kubenswrapper[4684]: E0123 10:48:54.190911 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerName="copy" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.190931 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerName="copy" Jan 23 10:48:54 crc kubenswrapper[4684]: E0123 10:48:54.190965 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerName="gather" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.190973 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerName="gather" Jan 23 10:48:54 crc kubenswrapper[4684]: E0123 10:48:54.190995 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e68ea61-4eea-4774-b113-02f3df467b93" containerName="collect-profiles" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.191005 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e68ea61-4eea-4774-b113-02f3df467b93" containerName="collect-profiles" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.191267 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e68ea61-4eea-4774-b113-02f3df467b93" containerName="collect-profiles" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.191432 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" 
containerName="copy" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.191447 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b31bbd-d573-438a-a8d1-7a2376673a73" containerName="gather" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.193155 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.228901 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tc9c5"] Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.326630 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-utilities\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.327490 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z7gs\" (UniqueName: \"kubernetes.io/projected/cd8dd7a8-fa19-4528-b115-2438399fce82-kube-api-access-9z7gs\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.327539 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-catalog-content\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.429347 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z7gs\" (UniqueName: \"kubernetes.io/projected/cd8dd7a8-fa19-4528-b115-2438399fce82-kube-api-access-9z7gs\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.429465 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-catalog-content\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.429560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-utilities\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.430148 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-utilities\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.430722 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-catalog-content\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.453633 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z7gs\" (UniqueName: \"kubernetes.io/projected/cd8dd7a8-fa19-4528-b115-2438399fce82-kube-api-access-9z7gs\") pod \"certified-operators-tc9c5\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:54 crc kubenswrapper[4684]: I0123 10:48:54.516146 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:48:55 crc kubenswrapper[4684]: I0123 10:48:55.018789 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tc9c5"] Jan 23 10:48:55 crc kubenswrapper[4684]: I0123 10:48:55.492790 4684 generic.go:334] "Generic (PLEG): container finished" podID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerID="08385df5f21c7d188a602de4966edaec2dbd66b92c12ccaa327b95381a4283be" exitCode=0 Jan 23 10:48:55 crc kubenswrapper[4684]: I0123 10:48:55.493068 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerDied","Data":"08385df5f21c7d188a602de4966edaec2dbd66b92c12ccaa327b95381a4283be"} Jan 23 10:48:55 crc kubenswrapper[4684]: I0123 10:48:55.493092 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerStarted","Data":"b8e607973d6215fedd2a1988ad92d40ba0b7e76b2624dad469333c1e224d6200"} Jan 23 10:48:55 crc kubenswrapper[4684]: I0123 10:48:55.495253 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:48:56 crc kubenswrapper[4684]: I0123 10:48:56.503031 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerStarted","Data":"1aa293245f76462d40e67204edb4c872352fffb1f8b40fa7b3460e8ce58adcd9"} Jan 23 10:48:57 crc kubenswrapper[4684]: I0123 10:48:57.517841 4684 generic.go:334] "Generic (PLEG): container finished" podID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerID="1aa293245f76462d40e67204edb4c872352fffb1f8b40fa7b3460e8ce58adcd9" exitCode=0 Jan 23 10:48:57 crc kubenswrapper[4684]: I0123 10:48:57.518500 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerDied","Data":"1aa293245f76462d40e67204edb4c872352fffb1f8b40fa7b3460e8ce58adcd9"} Jan 23 10:48:58 crc kubenswrapper[4684]: I0123 10:48:58.532951 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerStarted","Data":"f1ff0399690da7b5d1931c4083d00238164ec2eba9152499edf0df5221039e03"} Jan 23 10:48:58 crc kubenswrapper[4684]: I0123 10:48:58.566378 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tc9c5" podStartSLOduration=2.0780249 podStartE2EDuration="4.566360044s" 
podCreationTimestamp="2026-01-23 10:48:54 +0000 UTC" firstStartedPulling="2026-01-23 10:48:55.494849056 +0000 UTC m=+6108.118227597" lastFinishedPulling="2026-01-23 10:48:57.98318419 +0000 UTC m=+6110.606562741" observedRunningTime="2026-01-23 10:48:58.557964371 +0000 UTC m=+6111.181342912" watchObservedRunningTime="2026-01-23 10:48:58.566360044 +0000 UTC m=+6111.189738585" Jan 23 10:48:58 crc kubenswrapper[4684]: I0123 10:48:58.582452 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:48:58 crc kubenswrapper[4684]: E0123 10:48:58.582723 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:49:04 crc kubenswrapper[4684]: I0123 10:49:04.517589 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:49:04 crc kubenswrapper[4684]: I0123 10:49:04.518886 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:49:04 crc kubenswrapper[4684]: I0123 10:49:04.569284 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:49:04 crc kubenswrapper[4684]: I0123 10:49:04.646308 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:49:04 crc kubenswrapper[4684]: I0123 10:49:04.810208 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tc9c5"] Jan 23 10:49:06 crc kubenswrapper[4684]: I0123 10:49:06.602510 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tc9c5" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="registry-server" containerID="cri-o://f1ff0399690da7b5d1931c4083d00238164ec2eba9152499edf0df5221039e03" gracePeriod=2 Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.645077 4684 generic.go:334] "Generic (PLEG): container finished" podID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerID="f1ff0399690da7b5d1931c4083d00238164ec2eba9152499edf0df5221039e03" exitCode=0 Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.645123 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerDied","Data":"f1ff0399690da7b5d1931c4083d00238164ec2eba9152499edf0df5221039e03"} Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.773597 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.914527 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z7gs\" (UniqueName: \"kubernetes.io/projected/cd8dd7a8-fa19-4528-b115-2438399fce82-kube-api-access-9z7gs\") pod \"cd8dd7a8-fa19-4528-b115-2438399fce82\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.914811 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-utilities\") pod \"cd8dd7a8-fa19-4528-b115-2438399fce82\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.914954 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-catalog-content\") pod \"cd8dd7a8-fa19-4528-b115-2438399fce82\" (UID: \"cd8dd7a8-fa19-4528-b115-2438399fce82\") " Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.916071 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-utilities" (OuterVolumeSpecName: "utilities") pod "cd8dd7a8-fa19-4528-b115-2438399fce82" (UID: "cd8dd7a8-fa19-4528-b115-2438399fce82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.938070 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8dd7a8-fa19-4528-b115-2438399fce82-kube-api-access-9z7gs" (OuterVolumeSpecName: "kube-api-access-9z7gs") pod "cd8dd7a8-fa19-4528-b115-2438399fce82" (UID: "cd8dd7a8-fa19-4528-b115-2438399fce82"). InnerVolumeSpecName "kube-api-access-9z7gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:49:07 crc kubenswrapper[4684]: I0123 10:49:07.977411 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8dd7a8-fa19-4528-b115-2438399fce82" (UID: "cd8dd7a8-fa19-4528-b115-2438399fce82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.017024 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.017063 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z7gs\" (UniqueName: \"kubernetes.io/projected/cd8dd7a8-fa19-4528-b115-2438399fce82-kube-api-access-9z7gs\") on node \"crc\" DevicePath \"\"" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.017078 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8dd7a8-fa19-4528-b115-2438399fce82-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.660421 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc9c5" event={"ID":"cd8dd7a8-fa19-4528-b115-2438399fce82","Type":"ContainerDied","Data":"b8e607973d6215fedd2a1988ad92d40ba0b7e76b2624dad469333c1e224d6200"} Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.660864 4684 scope.go:117] "RemoveContainer" containerID="f1ff0399690da7b5d1931c4083d00238164ec2eba9152499edf0df5221039e03" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.660677 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tc9c5" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.715455 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tc9c5"] Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.726598 4684 scope.go:117] "RemoveContainer" containerID="1aa293245f76462d40e67204edb4c872352fffb1f8b40fa7b3460e8ce58adcd9" Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.726869 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tc9c5"] Jan 23 10:49:08 crc kubenswrapper[4684]: I0123 10:49:08.754006 4684 scope.go:117] "RemoveContainer" containerID="08385df5f21c7d188a602de4966edaec2dbd66b92c12ccaa327b95381a4283be" Jan 23 10:49:09 crc kubenswrapper[4684]: I0123 10:49:09.592361 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" path="/var/lib/kubelet/pods/cd8dd7a8-fa19-4528-b115-2438399fce82/volumes" Jan 23 10:49:11 crc kubenswrapper[4684]: I0123 10:49:11.582411 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:49:11 crc kubenswrapper[4684]: E0123 10:49:11.583045 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:49:26 crc kubenswrapper[4684]: I0123 10:49:26.582515 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:49:26 crc kubenswrapper[4684]: E0123 10:49:26.583224 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:49:40 crc kubenswrapper[4684]: I0123 10:49:40.581936 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:49:40 crc kubenswrapper[4684]: E0123 10:49:40.582730 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:49:53 crc kubenswrapper[4684]: I0123 10:49:53.581780 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:49:53 crc kubenswrapper[4684]: E0123 10:49:53.582629 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.277679 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gnjxv"] Jan 23 10:50:04 crc kubenswrapper[4684]: E0123 10:50:04.278791 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="extract-utilities" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.278808 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="extract-utilities" Jan 23 10:50:04 crc kubenswrapper[4684]: E0123 10:50:04.278823 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="registry-server" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.278830 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="registry-server" Jan 23 10:50:04 crc kubenswrapper[4684]: E0123 10:50:04.278865 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="extract-content" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.278873 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="extract-content" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.279054 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8dd7a8-fa19-4528-b115-2438399fce82" containerName="registry-server" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.280326 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.292810 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-utilities\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.292977 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-catalog-content\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.293054 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkql\" (UniqueName: \"kubernetes.io/projected/9a854d97-e024-40f4-9fb6-509e58ae3934-kube-api-access-wdkql\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.298374 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnjxv"] Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.395939 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-catalog-content\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.396093 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkql\" (UniqueName: \"kubernetes.io/projected/9a854d97-e024-40f4-9fb6-509e58ae3934-kube-api-access-wdkql\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.396200 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-utilities\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.396542 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-catalog-content\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.396554 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-utilities\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.418355 4684 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wdkql\" (UniqueName: \"kubernetes.io/projected/9a854d97-e024-40f4-9fb6-509e58ae3934-kube-api-access-wdkql\") pod \"redhat-marketplace-gnjxv\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:04 crc kubenswrapper[4684]: I0123 10:50:04.601669 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:05 crc kubenswrapper[4684]: I0123 10:50:05.109042 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnjxv"] Jan 23 10:50:05 crc kubenswrapper[4684]: I0123 10:50:05.151062 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerStarted","Data":"12ce5f293afbf30d2f009c66c5baa615a1d2143dddd09525440fd20501ecd265"} Jan 23 10:50:06 crc kubenswrapper[4684]: I0123 10:50:06.159402 4684 generic.go:334] "Generic (PLEG): container finished" podID="9a854d97-e024-40f4-9fb6-509e58ae3934" containerID="3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42" exitCode=0 Jan 23 10:50:06 crc kubenswrapper[4684]: I0123 10:50:06.159473 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerDied","Data":"3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42"} Jan 23 10:50:06 crc kubenswrapper[4684]: I0123 10:50:06.582818 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:50:06 crc kubenswrapper[4684]: E0123 10:50:06.583472 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:50:07 crc kubenswrapper[4684]: I0123 10:50:07.169540 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerStarted","Data":"3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9"} Jan 23 10:50:08 crc kubenswrapper[4684]: I0123 10:50:08.435222 4684 generic.go:334] "Generic (PLEG): container finished" podID="9a854d97-e024-40f4-9fb6-509e58ae3934" containerID="3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9" exitCode=0 Jan 23 10:50:08 crc kubenswrapper[4684]: I0123 10:50:08.435503 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerDied","Data":"3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9"} Jan 23 10:50:10 crc kubenswrapper[4684]: I0123 10:50:10.460931 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerStarted","Data":"89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3"} Jan 23 10:50:10 crc kubenswrapper[4684]: I0123 10:50:10.484333 4684 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-gnjxv" podStartSLOduration=3.3213606 podStartE2EDuration="6.484309997s" podCreationTimestamp="2026-01-23 10:50:04 +0000 UTC" firstStartedPulling="2026-01-23 10:50:06.162024305 +0000 UTC m=+6178.785402836" lastFinishedPulling="2026-01-23 10:50:09.324973692 +0000 UTC m=+6181.948352233" observedRunningTime="2026-01-23 10:50:10.479130247 +0000 UTC m=+6183.102508788" watchObservedRunningTime="2026-01-23 10:50:10.484309997 +0000 UTC m=+6183.107688538" Jan 23 10:50:14 crc kubenswrapper[4684]: I0123 10:50:14.602786 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:14 crc kubenswrapper[4684]: I0123 10:50:14.603477 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:14 crc kubenswrapper[4684]: I0123 10:50:14.651676 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:15 crc kubenswrapper[4684]: I0123 10:50:15.555378 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:15 crc kubenswrapper[4684]: I0123 10:50:15.602869 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnjxv"] Jan 23 10:50:17 crc kubenswrapper[4684]: I0123 10:50:17.523865 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gnjxv" podUID="9a854d97-e024-40f4-9fb6-509e58ae3934" containerName="registry-server" containerID="cri-o://89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3" gracePeriod=2 Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.483820 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.534631 4684 generic.go:334] "Generic (PLEG): container finished" podID="9a854d97-e024-40f4-9fb6-509e58ae3934" containerID="89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3" exitCode=0 Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.534682 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerDied","Data":"89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3"} Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.534709 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnjxv" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.534731 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnjxv" event={"ID":"9a854d97-e024-40f4-9fb6-509e58ae3934","Type":"ContainerDied","Data":"12ce5f293afbf30d2f009c66c5baa615a1d2143dddd09525440fd20501ecd265"} Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.534757 4684 scope.go:117] "RemoveContainer" containerID="89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.556119 4684 scope.go:117] "RemoveContainer" containerID="3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.575216 4684 scope.go:117] "RemoveContainer" containerID="3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.582345 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:50:18 crc kubenswrapper[4684]: E0123 10:50:18.582728 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.635947 4684 scope.go:117] "RemoveContainer" containerID="89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3" Jan 23 10:50:18 crc kubenswrapper[4684]: E0123 10:50:18.636451 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3\": container with ID starting with 89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3 not found: ID does not exist" containerID="89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.636489 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3"} err="failed to get container status \"89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3\": rpc error: code = NotFound desc = could not find container \"89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3\": container with ID starting with 89cb40f5c0bf31707ef16be4669df69e2908d3e51e5ee90166111b8eb0ef43d3 not found: ID does not exist" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.636513 4684 scope.go:117] "RemoveContainer" containerID="3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9" Jan 23 10:50:18 crc kubenswrapper[4684]: E0123 10:50:18.636785 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9\": container with ID starting with 3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9 not found: ID does not exist" containerID="3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.636836 
4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9"} err="failed to get container status \"3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9\": rpc error: code = NotFound desc = could not find container \"3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9\": container with ID starting with 3c4c970a87d847e63243d163ecfb1186d64ad7591bed85d6dff7e4d3333ce3b9 not found: ID does not exist" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.636868 4684 scope.go:117] "RemoveContainer" containerID="3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42" Jan 23 10:50:18 crc kubenswrapper[4684]: E0123 10:50:18.637109 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42\": container with ID starting with 3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42 not found: ID does not exist" containerID="3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.637130 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42"} err="failed to get container status \"3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42\": rpc error: code = NotFound desc = could not find container \"3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42\": container with ID starting with 3a53ca20302909a026e2e76eaa9f408f1f967bc439b5b06e72e6320b97cd1e42 not found: ID does not exist" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.680864 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkql\" (UniqueName: \"kubernetes.io/projected/9a854d97-e024-40f4-9fb6-509e58ae3934-kube-api-access-wdkql\") pod \"9a854d97-e024-40f4-9fb6-509e58ae3934\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.681947 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-catalog-content\") pod \"9a854d97-e024-40f4-9fb6-509e58ae3934\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.682059 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-utilities\") pod \"9a854d97-e024-40f4-9fb6-509e58ae3934\" (UID: \"9a854d97-e024-40f4-9fb6-509e58ae3934\") " Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.682901 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-utilities" (OuterVolumeSpecName: "utilities") pod "9a854d97-e024-40f4-9fb6-509e58ae3934" (UID: "9a854d97-e024-40f4-9fb6-509e58ae3934"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.687834 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a854d97-e024-40f4-9fb6-509e58ae3934-kube-api-access-wdkql" (OuterVolumeSpecName: "kube-api-access-wdkql") pod "9a854d97-e024-40f4-9fb6-509e58ae3934" (UID: "9a854d97-e024-40f4-9fb6-509e58ae3934"). InnerVolumeSpecName "kube-api-access-wdkql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.709975 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a854d97-e024-40f4-9fb6-509e58ae3934" (UID: "9a854d97-e024-40f4-9fb6-509e58ae3934"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.784584 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.784868 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a854d97-e024-40f4-9fb6-509e58ae3934-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.784943 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkql\" (UniqueName: \"kubernetes.io/projected/9a854d97-e024-40f4-9fb6-509e58ae3934-kube-api-access-wdkql\") on node \"crc\" DevicePath \"\"" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.787231 4684 scope.go:117] "RemoveContainer" containerID="20c2f9f1a226fa7267f6b86c95ee3ef58c96a602105668cffbd36bdda353c745" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.810217 4684 scope.go:117] "RemoveContainer" containerID="6ffb75538c95974f5e18f1556363a0c9a185cc3643f6b1bfbbb6654321d6036f" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.854214 4684 scope.go:117] "RemoveContainer" containerID="54e2e506206e9d628fdbe5a527babc1b518a1c6c1cad875f0df4f96d70abd5d9" Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.916537 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnjxv"] Jan 23 10:50:18 crc kubenswrapper[4684]: I0123 10:50:18.926031 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnjxv"] Jan 23 10:50:19 crc kubenswrapper[4684]: I0123 10:50:19.592975 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a854d97-e024-40f4-9fb6-509e58ae3934" path="/var/lib/kubelet/pods/9a854d97-e024-40f4-9fb6-509e58ae3934/volumes" Jan 23 10:50:31 crc kubenswrapper[4684]: I0123 10:50:31.582162 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:50:31 crc kubenswrapper[4684]: E0123 10:50:31.583839 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" 
podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:50:45 crc kubenswrapper[4684]: I0123 10:50:45.582076 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:50:45 crc kubenswrapper[4684]: E0123 10:50:45.582828 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:51:00 crc kubenswrapper[4684]: I0123 10:51:00.582769 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:51:00 crc kubenswrapper[4684]: E0123 10:51:00.583809 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:51:11 crc kubenswrapper[4684]: I0123 10:51:11.583244 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:51:11 crc kubenswrapper[4684]: E0123 10:51:11.587390 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:51:25 crc kubenswrapper[4684]: I0123 10:51:25.582478 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:51:25 crc kubenswrapper[4684]: E0123 10:51:25.583090 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:51:39 crc kubenswrapper[4684]: I0123 10:51:39.582596 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:51:39 crc kubenswrapper[4684]: E0123 10:51:39.583501 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:51:52 crc kubenswrapper[4684]: I0123 10:51:52.582621 4684 scope.go:117] "RemoveContainer" 
containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:51:52 crc kubenswrapper[4684]: E0123 10:51:52.584433 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" Jan 23 10:52:05 crc kubenswrapper[4684]: I0123 10:52:05.582782 4684 scope.go:117] "RemoveContainer" containerID="4e7729ec02a118e54c5ee089f6315bc620ae5a4d86c81be4ec83aff11eb4ee9b" Jan 23 10:52:05 crc kubenswrapper[4684]: E0123 10:52:05.583499 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wtphf_openshift-machine-config-operator(fe8e0d00-860e-4d47-9f48-686555520d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-wtphf" podUID="fe8e0d00-860e-4d47-9f48-686555520d79" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134651344024453 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134651345017371 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134634420016507 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134634420015457 5ustar corecore